Memory leak detected when running on Linux - AngularJS

I tried to increase the number of listeners on the EventEmitter, but it's not working. The same code runs with no warnings on Windows.
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
at EventEmitter.addListener (events.js:160:15)
at Server.connect (/dir/node_modules/mongoose/node_modules/mongodb/lib/server.js:291:17)
at Db.open (/dir/node_modules/mongoose/node_modules/mongodb/lib/db.js:190:19)
at MongoStore._open_database (/dir/node_modules/connect-mongo/lib/connect-mongo.js:182:15)
at MongoStore._get_collection (/dir/node_modules/connect-mongo/lib/connect-mongo.js:177:14)
at /dir/node_modules/connect-mongo/lib/connect-mongo.js:194:16
at /dir/node_modules/mongoose/node_modules/mongodb/lib/db.js:200:5
at connectHandler (/dir/node_modules/mongoose/node_modules/mongodb/lib/server.js:272:7)
at g (events.js:180:16)
at EventEmitter.emit (events.js:95:17)

Solved...
Modifying the maximum number of listeners did not work.
The problem is with the newer versions of mongoose/mongodb.
When I watched the MongoDB server, I noticed that connections to the DB were being created in a continuous loop (possibly due to some problem with the new versions).
I switched them back to the previous versions in package.json, cleared the cache, and installed the dependencies again. Now it's working.


[flink] Task manager initialization failed

I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows the task manager count is 0.
I checked the log, and it seems the task manager fails to initialize:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be configured by users, which is why it isn't documented.
The Windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E

Android Manifest merger error in Codename One

In a bare bones project, I added these build hints:
android.gradleDep=compile 'com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5'
android.min_sdk_version=23
I would like to import the following Android library to make a CN1Lib (that requires at least Android SDK 23):
https://github.com/erikagtierrez/multiple-media-picker
In short: I spent a day trying to import it; I also experimented with Android Studio and with suggestions found on Stack Overflow (trying to make a custom .aar), without success.
Could you help me import this library? There is a manifest merger error.
In fact, the issue reported by the build server is:
* What went wrong:
Execution failed for task ':processReleaseManifest'.
> Manifest merger failed : Attribute application#label value=(BareBones) from AndroidManifest.xml:15:17-42
is also present at [com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5] AndroidManifest.xml:23:9-41 value=(@string/app_name).
Suggestion: add 'tools:replace="android:label"' to <application> element at AndroidManifest.xml:15:3-43:103 to override.
I also tried to add the build hint:
android.xapplication_attr=tools:replace="android:label"
as suggested by the previous error, without success.
In the last case, I get:
Merging result: ERROR
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml:15:3-43:103 Error:
tools:replace specified at line:15 for attribute android:label, but no new value specified
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml Error:
Validation failed, exiting
-- Merging decision tree log ---
The last full log is here: https://gist.github.com/jsfan3/dd6c23f86a2ac949f996910c8cece62b
Thank you
This is happening because our code thinks you injected android:label on your own, so it doesn't inject the attribute itself to avoid a collision...
Change the code to this:
android.xapplication_attr=tools:replace="android:label" android:label="App Name"

Flink1.5.4 exception: Corrupt stream, found tag: 105

My program joins two streams without a Flink window.
I connect the two streams and define a class A extends RichCoFlatMapFunction to handle them.
In class A, I use a Guava cache to hold all the data from the flatMap1/flatMap2 methods and join records by a tag from the streams.
The Guava cache then has a removal listener that collects joined and expired data to the next Flink function (a sketch of this pattern follows the snippet below):
private synchronized void collect(ReqFeatures features) {
    feaCollector.collect(features);
}
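For context, here is a minimal, hypothetical sketch of the setup described above; the Event and ReqFeatures types, the cache configuration, and the eviction wiring are illustrative assumptions, not the original code:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import java.util.concurrent.TimeUnit;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

// Hypothetical stand-ins for the question's actual types.
class Event { String tag; }
class ReqFeatures { }

public class A extends RichCoFlatMapFunction<Event, Event, ReqFeatures> {

    private transient Cache<String, ReqFeatures> cache;
    private transient Collector<ReqFeatures> feaCollector;

    @Override
    public void open(Configuration parameters) {
        // Emit joined/expired entries when the cache evicts them. The listener
        // runs during cache maintenance, possibly on a thread that does not
        // hold Flink's checkpointing lock -- see the answer further down.
        RemovalListener<String, ReqFeatures> listener =
                n -> collect(n.getValue());
        cache = CacheBuilder.newBuilder()
                .expireAfterWrite(10, TimeUnit.MINUTES) // assumed expiry policy
                .removalListener(listener)
                .build();
    }

    @Override
    public void flatMap1(Event left, Collector<ReqFeatures> out) {
        feaCollector = out;
        // merge 'left' into the cached entry keyed by its join tag ...
    }

    @Override
    public void flatMap2(Event right, Collector<ReqFeatures> out) {
        feaCollector = out;
        // merge 'right' into the cached entry keyed by its join tag ...
    }

    // Called from the cache's removal listener.
    private synchronized void collect(ReqFeatures features) {
        feaCollector.collect(features);
    }
}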
It runs well at the beginning each time, but a few hours later it always dies with this exception.
java.io.IOException: Corrupt stream, found tag: 105
at org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:220)
at org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:49)
at org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
at org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:106)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:172)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:104)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
at java.lang.Thread.run(Thread.java:748)
And sometimes there's another error in the log:
java.lang.IllegalStateException: When there are multiple buffers, an unfinished bufferConsumer can not be at the head of the buffers queue.
at org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
at org.apache.flink.runtime.io.network.partition.PipelinedSubpartition.pollBuffer(PipelinedSubpartition.java:158)
at org.apache.flink.runtime.io.network.partition.PipelinedSubpartitionView.getNextBuffer(PipelinedSubpartitionView.java:51)
at org.apache.flink.runtime.io.network.partition.consumer.LocalInputChannel.getNextBuffer(LocalInputChannel.java:186)
at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:551)
at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:508)
at org.apache.flink.streaming.runtime.io.BarrierTracker.getNextNonBlocked(BarrierTracker.java:94)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:104)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
at java.lang.Thread.run(Thread.java:748)
If I use a Flink window function instead, this exception doesn't occur.
Why does this exception occur, and how can I resolve it?
I can confirm this also happens in Flink 1.9.1 (albeit for us, it happens when we run flink stop <job-id>)
I fixed the same problem by taking the checkpointing lock while collecting output. The user's flatMap function already holds the checkpointing lock, so if you collect output inside the flatMap function, that also fixes this problem.
In Flink's code:
synchronized (checkpointingLock) {
    numRecordsIn.inc();
    streamOperator.setKeyContextElement1(record);
    streamOperator.processElement(record);
}
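One way to apply that advice in the user function, as a hedged sketch building on the earlier one (the pending/onEviction/drain names are hypothetical): have the removal listener only buffer evicted entries, and drain the buffer inside flatMap1/flatMap2, which the task thread calls while it already holds the checkpointing lock.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.apache.flink.util.Collector;

// Fields and methods inside the RichCoFlatMapFunction sketched earlier:
private final Queue<ReqFeatures> pending = new ConcurrentLinkedQueue<>();

// The removal listener only buffers; it never touches the collector directly.
private void onEviction(ReqFeatures features) {
    pending.add(features);
}

@Override
public void flatMap1(Event left, Collector<ReqFeatures> out) {
    // ... update the cache as before ...
    drain(out); // safe: the task thread already holds the checkpointing lock here
}

private void drain(Collector<ReqFeatures> out) {
    ReqFeatures f;
    while ((f = pending.poll()) != null) {
        out.collect(f);
    }
}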

ASIHTTPRequest misses files

I'm using ASIHTTPRequest to download a list of 30 files, but 2 or 3 (different ones each time) are always lost.
Is it possible to set the maximum number of connections per second? I've tried:
- [[ASIHTTPRequest sharedQueue] setMaxConcurrentOperationCount:1];
- [cola setMaxConcurrentOperationCount:1];
But I don't have any luck...
Any help?
Thank you
I've solved this problem with:
[request setPersistentConnectionTimeoutSeconds:80];
[request setShouldAttemptPersistentConnection:NO];
The problem may be that the installed Apache doesn't support persistent connections.
See the Configuring persistent connections section in http://allseeing-i.com/ASIHTTPRequest/How-to-use for more info.

Tomcat cluster fails and generates tons of logs

Periodically, I run into problems with my Tomcat 6 cluster (2 nodes). One of the nodes will just go haywire and generate a ton of logs repeating the following:
Aug 25, 2009 11:44:10 AM org.apache.catalina.ha.session.DeltaRequest reset
SEVERE: Unable to remove element
java.util.NoSuchElementException
at java.util.LinkedList.remove(LinkedList.java:788)
at java.util.LinkedList.removeFirst(LinkedList.java:134)
at org.apache.catalina.ha.session.DeltaRequest.reset(DeltaRequest.java:201)
at org.apache.catalina.ha.session.DeltaRequest.execute(DeltaRequest.java:195)
at org.apache.catalina.ha.session.DeltaManager.handleSESSION_DELTA(DeltaManager.java:1364)
at org.apache.catalina.ha.session.DeltaManager.messageReceived(DeltaManager.java:1320)
at org.apache.catalina.ha.session.DeltaManager.messageDataReceived(DeltaManager.java:1083)
at org.apache.catalina.ha.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:87)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:916)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:897)
at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:264)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:110)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:241)
at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:225)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:188)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:91)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
That's the only thing it shows. The other node in the cluster is still active at this time. There's nothing to do but restart. The large volume of logs has caused disk-space issues more than a couple of times, too.
Does anybody have any idea what's wrong here?
Thanks!
Wong
This appears to be a bug in Tomcat 6. If you look at the source at
http://www.java2s.com/Open-Source/Java-Document/Sevlet-Container/apache-tomcat-6.0.14/org/apache/catalina/ha/session/DeltaRequest.java.htm (line 225),
you'll see that the reset() method can potentially throw this exception. I suggest contacting the Tomcat developers about this issue.
