Husky and Git hook guards not respected in IDEs - reactjs

I have husky and lint-staged set up in a repository I am a main contributor to, and these guards consist of enforcing correct commitlint messages, proper branch-naming patterns, and checking for linting errors on all commits.
This all works great when creating/committing/pushing branches from the command line, but when using any IDE's integrated Git UI, the checks are completely ignored and anything can make it to the repo (incorrect commit messages, incorrect branch names, linting errors, etc.). I am not sure why, nor do I know how to avoid this.
Here are the relevant pieces of code that make up this web of guards (with irrelevant code omitted for clarity):
package.json:
"scripts": {
"lint": "eslint app/"
},
"devDependencies": {
"#commitlint/cli": "^16.2.3",
"#commitlint/config-conventional": "^16.2.1",
"enforce-branch-name": "^1.0.1"
},
...
"husky": {
"hooks": {
"commit-msg": "commitlint -E HUSKY_GIT_PARAMS",
"pre-push": "enforce-branch-name '^(branch-name-regex-rules)$'"
}
},
"lint-staged": {
"*.js": "npm run lint"
}
And then I have my .husky folder at the project root, with my commit-msg, pre-commit, and pre-push hook scripts:
commit-msg:
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"
npx --no -- commitlint --edit $1
pre-commit:
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"
npx lint-staged
pre-push:
#!/bin/sh
LC_ALL=C
local_branch="$(git rev-parse --abbrev-ref HEAD)"
valid_branch_regex="^(branch-name-regex-rules)$"
message="Error message for incorrect branch naming"
if [[ ! $local_branch =~ $valid_branch_regex ]]; then
    echo "$message"
    exit 1
fi
exit 0
I think that should cover all the relevant pieces of code I have. Like I said, when using the command line to make commits and push branches to the remote, these rules come into play and work, informing developers of incorrect patterns.
But when any developer uses, say, VSCode's integrated Git feature to commit and/or push (or even just stage files), none of the guards run, and it's as if someone had added --no-verify to a command-line Git invocation to force the commit/push despite the failing hooks/guards.
I am not sure what I need to do to get husky and lint-staged to work for any method of staging files, making commits, naming branches, and so on.
Does anyone know what I'm doing wrong, or can point me somewhere that might explain what I'm doing wrong? Thanks in advance!

Related

Are the added dependencies really being compiled by shadow-cljs? If so, why do the values stay the same?

I am following the shadow-cljs Quick Start documentation with a minimal example project. Here is the link.
Initially, I had this shadow-cljs.edn file:
;; shadow-cljs configuration
{:source-paths
["src/dev"
"src/main"
"src/test"]
:dev-http {8080 "public"}
:dependencies
[]
:builds
{:frontend
{:target :browser
:modules {:main {:init-fn acme.frontend.app/init}}
}}}
In /Users/pedro/projects/acme-app/src/main/acme/frontend/app.cljs, I also have:
(ns acme.frontend.app)
(defn init []
(println "Hello World"))
I can build it with the command:
$ npx shadow-cljs compile frontend
shadow-cljs - config: /Users/pedro/projects/acme-app/shadow-cljs.edn
shadow-cljs - updating dependencies
shadow-cljs - dependencies updated
[:frontend] Compiling ...
[:frontend] Build completed. (79 files, 0 compiled, 0 warnings, 4.88s)
I have been adding dependencies such as:
:dependencies [[day8.re-frame/re-frame-10x "1.2.1"]
[proto-repl "0.3.1"]
[re-frame "1.2.0"]
[com.degel/re-frame-firebase "0.9.6-SNAPSHOT"]
[bidi "2.1.5"]
[re-com "2.13.2-106-180ea1f-SNAPSHOT-TALLYFOR"]
[com.andrewmcveigh/cljs-time "0.5.2"]
[com.pupeno/free-form "0.6.0"]
[binaryage/dirac "RELEASE"]
[hickory "0.7.1"]
[cljs-hash "0.0.2"]
[medley "1.2.0"]]
But the build does not change in terms of files, compiled, or warnings; just the time changes a bit, and time is probably somewhat random/stochastic (79 files, 0 compiled, 0 warnings, 5.59s).
Are the dependencies really being compiled? How do I know if the dependencies were compiled too?
If they are being compiled, why does the number of files stay the same?
Note: I am not invoking the functions provided by the dependencies - and I do not want to invoke them, for debugging reasons.
Adding :dependencies does very little; they will not be compiled on their own. They are only made available on the classpath.
They will only be compiled and loaded once you add them to the :require in the ns forms of your files, or dynamically require them at the REPL. Without an explicit request (i.e. a :require) to load them, they are just passive, unused resources.
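For example (a minimal sketch, using the medley dependency from the question's list), referencing a library in an ns form is what triggers its compilation on the next build:

(ns acme.frontend.app
  (:require [medley.core :as medley])) ;; pulls medley into the build

(defn init []
  ;; the dependency is now compiled, loaded, and counted in the build output
  (println (medley/index-by :id [{:id 1}])))

After adding the :require, the file count reported by shadow-cljs should go up accordingly.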

SpawnSync doesn't work when using electron-builder

I am writing a react-electron application, and I noticed that when I used electron-builder to build it, the binary got stuck when calling "spawn".
With "yarn start" the application runs without problems; only with electron-builder does it get stuck.
Can you help?
Thanks,
Update
It seems that the C++ binary included as part of the program can't be executed within electron. If I give the hardcoded full path to the binary it works, but if I build the path from __dirname I get an error:
const GetLocalPath = () => {
  const path = __dirname + "/../cpp_program/"
  return {
    helloWorld: path + "helloWorld",
    helloWorldRepeat: path + "helloWorldRepeat"
  }
}
export function helloWorld(){
  // let dir = "/Users/Rick/projects/lala/github/tutorial/electron-tutorial-app/cpp_program";
  let comm = GetLocalPath().helloWorld;
The error message
internal/child_process.js:403 Uncaught (in promise) Error: spawn ENOTDIR
at ChildProcess.spawn (internal/child_process.js:403)
at Object.spawn (child_process.js:562)
at helloWorldRepeat (/Users/ricky/proje…ar/build/Lib.js:113)
at Object.<anonymous> (/Users/ricky/proje…sar/build/Lib.js:49)
at Generator.next (<anonymous>)
at /Users/ricky/proje…asar/build/Lib.js:9
at new Promise (<anonymous>)
at __awaiter (/Users/ricky/proje…asar/build/Lib.js:5)
at Object.handleInitialize (/Users/ricky/proje…sar/build/Lib.js:35)
at TestStateMachine.transition (/Users/ricky/proje…tStateMachine.js:56)
This is pretty odd because it works just fine with "yarn start", which is "tsc && electron"
package.json is shown below
"scripts": {
"start": "tsc && electron ."
},
"build": {
"appId": "com.example.myapp",
"productName": "MyApp",
"files": [
"build/**/*",
"public/**/*",
"src/images/**/*"
]
},
Update ver 2
Per Alexander's suggestion, I have included
"asar": false
inside package.json.
When I execute it, I get a different error:
Uncaught Error: spawn /Users/Rick/projects/lala/github/tutorial/electron-tutorial-app/dist/mac/MyApp.app/Contents/Resources/app/build/../cpp_program/helloWorldRepeat ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:269)
at onErrorNT (internal/child_process.js:465)
at processTicksAndRejections (internal/process/task_queues.js:80)
errnoException @ internal/errors.js:510
ChildProcess._handle.onexit @ internal/child_process.js:269
onErrorNT @ internal/child_process.js:465
processTicksAndRejections @ internal/process/task_queues.js:80
Now the error is that there is no "helloWorldRepeat" file inside /Users/Rick/projects/lala/github/tutorial/electron-tutorial-app/dist/mac/MyApp.app/Contents/Resources/app/build/../cpp_program/.
The binary is in fact located at
/Users/Rick/projects/lala/github/tutorial/electron-tutorial-app/build/../cpp_program/helloWorldRepeat
Do I have to manually create this folder and paste the binary files there?
By default, Electron Builder compiles your application and packs all resources into one large archive file (think of it as a ZIP file), which Electron can read just fine thanks to its support for this format, known as "ASAR".
When running the built program, the code is read from the archive. This means that __dirname will point to a directory inside the archive. The operating system, however, cannot read from the archive. Since you did not actually include the piece of code calling child_process.spawn(), I can only speculate on why you get ENOTDIR, which hints that a given path was expected to be a directory but is not; I assume it is because you point to a path inside the ASAR file.
When relying on external binaries, it is a good idea to either keep them outside the ASAR archive and programmatically find the path to them (which is somewhat complex), or prevent Electron Builder from packing your app into an ASAR file at all. Either way, you will also have to ask Electron Builder to include the executable in the built version of your app. This can be done by modifying your package.json:
{
  ...
  "build": {
    "appId": "com.example.myapp",
    "productName": "MyApp",
    "files": [
      "build/**/*",
      "public/**/*",
      "src/images/**/*"
    ],
    "extraResources": [
      "cpp_program/*"
    ],
    "asar": false
  },
}
(Replace "cpp_program/*" by whatever path pattern matches your desired directory, possibly even replacing /* with /**/* if there are subdirectories.)
This way, the directory cpp_program will be copied to your app's resources directory upon build. This path, according to Electron Builder's documentation, is Contents/Resources/ on MacOS. Thus, you will have to modify your path (__dirname + "../" will not work because it will point to Contents/Resources/app, but __dirname + "../../" should; if not, experimenting will lead to the correct path)*. Remember to run Electron Builder every time your C++ executable changes, as the files in the .app folder are not linked to their counterparts outside the built app.
* You can switch between development paths (__dirname + "../") and production paths (__dirname + "../../" or whatever) by checking if __dirname.includes (".app/")
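A minimal sketch of that idea (getCppProgramDir is a hypothetical helper; app.isPackaged and process.resourcesPath are standard Electron APIs, and the cpp_program names come from the question):

const path = require("path");
const { app } = require("electron");

// In development, cpp_program sits next to the project's build output;
// in a packaged app, extraResources land in Contents/Resources (process.resourcesPath).
const getCppProgramDir = () =>
  app.isPackaged
    ? path.join(process.resourcesPath, "cpp_program")
    : path.join(__dirname, "..", "cpp_program");

const helloWorldPath = path.join(getCppProgramDir(), "helloWorld");

Checking app.isPackaged avoids string-matching on __dirname, though the .app/ check from the footnote works too.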

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JS heap out of memory [duplicate]

Today I ran my filesystem-indexing script to refresh the RAID file index, and after 4h it crashed with the following error:
[md5:] 241613/241627 97.5%
[md5:] 241614/241627 97.5%
[md5:] 241625/241627 98.1%
Creating missing list... (79570 files missing)
Creating new files list... (241627 new files)
<--- Last few GCs --->
11629672 ms: Mark-sweep 1174.6 (1426.5) -> 1172.4 (1418.3) MB, 659.9 / 0 ms [allocation failure] [GC in old space requested].
11630371 ms: Mark-sweep 1172.4 (1418.3) -> 1172.4 (1411.3) MB, 698.9 / 0 ms [allocation failure] [GC in old space requested].
11631105 ms: Mark-sweep 1172.4 (1411.3) -> 1172.4 (1389.3) MB, 733.5 / 0 ms [last resort gc].
11631778 ms: Mark-sweep 1172.4 (1389.3) -> 1172.4 (1368.3) MB, 673.6 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x3d1d329c9e59 <JS Object>
1: SparseJoinWithSeparatorJS(aka SparseJoinWithSeparatorJS) [native array.js:~84] [pc=0x3629ef689ad0] (this=0x3d1d32904189 <undefined>,w=0x2b690ce91071 <JS Array[241627]>,L=241627,M=0x3d1d329b4a11 <JS Function ConvertToString (SharedFunctionInfo 0x3d1d3294ef79)>,N=0x7c953bf4d49 <String[4]\: ,\n >)
2: Join(aka Join) [native array.js:143] [pc=0x3629ef616696] (this=0x3d1d32904189 <undefin...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [/usr/bin/node]
2: 0xe2c5fc [/usr/bin/node]
3: v8::Utils::ReportApiFailure(char const*, char const*) [/usr/bin/node]
4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/bin/node]
5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/bin/node]
6: v8::internal::Runtime_SparseJoinWithSeparator(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/bin/node]
7: 0x3629ef50961b
The server is equipped with 16 GB RAM and 24 GB SSD swap. I highly doubt my script exceeded 36 GB of memory; at least it shouldn't have.
The script creates an index of files, stored as an array of objects with file metadata (modification dates, permissions, etc. - no big data).
Here's the full script code:
http://pastebin.com/mjaD76c3
I've already experienced weird node issues with this script in the past, which forced me to e.g. split the index into multiple files, as node was glitching when working with such big strings. Is there any way to improve nodejs memory management for huge datasets?
If I remember correctly, V8 has a strict default limit for memory usage of around 1.7 GB if you do not increase it manually.
In one of our products we followed this solution in our deploy script:
node --max-old-space-size=4096 yourFile.js
There is also a new-space flag, but as I read here (a-tour-of-v8-garbage-collection), the new space only holds newly created short-term data, while the old space contains all referenced data structures - which in your case should be the best one to increase.
If you want to increase node's memory limit globally - not just for a single script - you can export an environment variable, like this:
export NODE_OPTIONS=--max_old_space_size=4096
Then you do not need to play with files when running builds like
npm run build.
Just in case anyone runs into this in an environment where they cannot set node properties directly (in my case a build tool):
NODE_OPTIONS="--max-old-space-size=4096" node ...
You can set the node options using an environment variable if you cannot pass them on the command line.
Here are some flag values, as additional info on how to allow more memory when you start up your node server.
1 GB - 8 GB:
#increase to 1gb
node --max-old-space-size=1024 index.js
#increase to 2gb
node --max-old-space-size=2048 index.js
#increase to 3gb
node --max-old-space-size=3072 index.js
#increase to 4gb
node --max-old-space-size=4096 index.js
#increase to 5gb
node --max-old-space-size=5120 index.js
#increase to 6gb
node --max-old-space-size=6144 index.js
#increase to 7gb
node --max-old-space-size=7168 index.js
#increase to 8gb
node --max-old-space-size=8192 index.js
I just faced the same problem with my EC2 t2.micro instance, which has 1 GB of memory.
I resolved the problem by creating a swap file (following this url) and setting the following environment variable:
export NODE_OPTIONS=--max_old_space_size=4096
Finally the problem was gone.
I hope this is helpful for future readers.
I was struggling with this even after setting --max-old-space-size.
Then I realised the --max-old-space-size options need to go before the karma script.
It is also best to specify both syntaxes, --max-old-space-size and --max_old_space_size. My script for karma:
node --max-old-space-size=8192 --optimize-for-size --max-executable-size=8192 --max_old_space_size=8192 --optimize_for_size --max_executable_size=8192 node_modules/karma/bin/karma start --single-run --max_new_space_size=8192 --prod --aot
reference https://github.com/angular/angular-cli/issues/1652
I encountered this issue when trying to debug with VSCode, so I just wanted to add that this is how you can add the argument to your debug setup.
You can add it to the runtimeArgs property of your config in launch.json.
See the example below.
{
  "version": "0.2.0",
  "configurations": [{
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceRoot}\\server.js"
    },
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Training Script",
      "program": "${workspaceRoot}\\training-script.js",
      "runtimeArgs": [
        "--max-old-space-size=4096"
      ]
    }
  ]}
I had a similar issue while doing an AOT angular build. The following commands helped me:
npm install -g increase-memory-limit
increase-memory-limit
Source: https://geeklearning.io/angular-aot-webpack-memory-trick/
I just want to add that on some systems, even increasing the node memory limit with --max-old-space-size is not enough, and there is an OS error like this:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
In this case, it is probably because you reached the maximum number of memory mappings per process.
You can check the max_map_count by running
sysctl vm.max_map_count
and increase it by running
sysctl -w vm.max_map_count=655300
and make it persist across reboots by adding this line
vm.max_map_count=655300
to the /etc/sysctl.conf file.
Check here for more info.
A good way to analyse the error is to run the process with strace:
strace node --max-old-space-size=128000 my_memory_consuming_process.js
I've faced this same problem recently and came across this thread, but my problem was with a React app. The changes below to the node start command solved my issue.
Syntax
node --max-old-space-size=<size> path-to/fileName.js
Example
node --max-old-space-size=16000 scripts/build.js
Why is the size 16000 in max-old-space-size?
Basically, it depends on the memory allocated to that thread and your node settings.
How do you verify and choose the right size?
This basically lives in the V8 engine. The code below helps you understand the heap size of your local node V8 engine.
const v8 = require('v8');
const totalHeapSize = v8.getHeapStatistics().total_available_size;
const totalHeapSizeGb = (totalHeapSize / 1024 / 1024 / 1024).toFixed(2);
console.log('totalHeapSizeGb: ', totalHeapSizeGb);
Steps to fix this issue (on Windows):
Open a command prompt, type %appdata%, and press enter
Navigate to the %appdata% > npm folder
Open or edit ng.cmd in your favorite editor
Add --max_old_space_size=8192 to the IF and ELSE blocks
Your ng.cmd file looks like this after the change:
@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe" "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
)
Recently, one of my projects ran into the same problem. I tried a couple of things, which anyone can try as debugging steps to identify the root cause:
As everyone suggested, increase the memory limit in node by adding this command:
{
  "scripts": {
    "server": "node --max-old-space-size={size-value} server/index.js"
  }
}
Here, the size-value I defined for my application was 1536 (as my kubernetes pod had a 2 GB memory limit and a 1.5 GB request).
So always define the size-value based on your frontend infrastructure/architecture limit (a little less than the limit).
One strict callout on the above command: use --max-old-space-size after the node command, not after the filename server/index.js.
If you have an nginx config file, then check the following things (a short sketch follows this list):
worker_connections: 16384 (for heavy frontend applications) [nginx's default is 512 connections per worker, which is too low for modern applications]
use: epoll (an efficient method) [nginx supports a variety of connection-processing methods]
http: add the following directives to free your workers from getting stuck handling unwanted tasks (client_body_timeout, reset_timedout_connection, client_header_timeout, keepalive_timeout, send_timeout).
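A hedged sketch of those nginx settings in context (the connection values come from the list above; the timeout values are illustrative assumptions, not recommendations):

events {
    worker_connections 16384;      # default is 512; raise for heavy frontend applications
    use epoll;                     # efficient connection-processing method on Linux
}

http {
    client_body_timeout 12;        # drop slow clients instead of tying up workers
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;
    reset_timedout_connection on;  # free memory held by timed-out connections
}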
Remove all logging/tracking tools like APM, Kafka, UTM tracking, Prerender (SEO), etc. - remove these middlewares or turn them off.
Now for code-level debugging: in your main server file, remove any unwanted console.log that just prints a message.
Now check every server route, i.e. app.get(), app.post(), ..., for the scenarios below:
data => if(data) res.send(data) // do you really need to wait for data, or does that API return something in the response that has to be waited for? If not, modify it like this:
data => res.send(data) // this will not block your thread; apply it wherever it's needed
else part: if there is no error coming, then simply return res.send({}) - NO console.log here.
error part: some people define it as error and others as err, which creates confusion and mistakes, like these:
`error => { next(err) } // here err is undefined`
`err => { next(error) } // here error is undefined`
`app.get(API, (req, res) => {
  error => next(error) // here next is not defined
})`
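A corrected sketch (a hypothetical Express route; fetchItems is an assumed placeholder - the point is simply that the parameter name you declare must be the name you use):

app.get('/api/items', (req, res, next) => {
  fetchItems()
    .then(data => res.send(data)) // no extra check or console.log needed on success
    .catch(err => next(err));     // declared name and used name match
});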
Remove winston, elastic-apm-node, and other unused libraries, found with the npx depcheck command.
In the axios service file, check that the methods and logging are done properly, i.e. not like this:
if(successCB) console.log("success") successCB(response.data) // this statement is wrong, because on success you only log, while successCB is called outside the if block and therefore runs in the failure case as well.
Save yourself from using stringify, parse, etc. on excessively large datasets (which I can see in your logs shown above).
Last but not least, every time your application crashes or your pods restart, check the logs. In the logs, look specifically for the section Security context.
It will tell you why, where, and who was the culprit behind the crash.
I will mention two types of solution.
My solution: in my case, I added this to my environment variables:
export NODE_OPTIONS=--max_old_space_size=20480
But even after restarting my computer it still did not work. My project folder was on the d:\ drive, so I moved my project to the c:\ drive and it worked.
My teammate's solution: the package.json configuration also worked.
"start": "rimraf ./build && react-scripts --expose-gc --max_old_space_size=4096 start",
For other beginners like me who didn't find any suitable solution for this error: check the installed node version (x32, x64, x86). I have a 64-bit CPU but had installed the x86 node version, which caused the CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory error.
If you want to change the memory globally for node (Windows), go to Advanced System Settings -> Environment Variables -> New User Variable:
variable name = NODE_OPTIONS
variable value = --max-old-space-size=4096
You can also change Windows environment variables with:
$env:NODE_OPTIONS="--max-old-space-size=8192"
Unix (macOS)
Open a terminal and open our .zshrc file using nano like so (this will create one if it doesn't exist):
nano ~/.zshrc
Update our NODE_OPTIONS environment variable by adding the following line into our currently open .zshrc file:
export NODE_OPTIONS=--max-old-space-size=8192 # increase node memory limit
Please note that we can set the number of megabytes passed in to whatever we like, provided our system has enough memory (here we are passing in 8192 megabytes which is roughly 8 GB).
Save and exit nano by pressing: ctrl + x, then y to agree and finally enter to save the changes.
Close and reopen the terminal to make sure our changes have been recognised.
We can print out the contents of our .zshrc file to see if our changes were saved like so: cat ~/.zshrc.
Linux (Ubuntu)
Open a terminal and open the .bashrc file using nano like so:
nano ~/.bashrc
The remaining steps are similar to the Mac steps above, except we would most likely be using ~/.bashrc by default (as opposed to ~/.zshrc), so these values would need to be substituted!
Link to Nodejs Docs
Use the option --optimize-for-size. It will focus on using less RAM.
I had this error on AWS Elastic Beanstalk; upgrading the instance type from t3.micro (free tier) to t3.small fixed the error.
In my case, I upgraded node.js to the latest version (12.8.0) and it worked like a charm.
Upgrade node to the latest version. I was on node 6.6 with this error and upgraded to 8.9.4 and the problem went away.
For Angular, this is how I fixed it:
In package.json, inside the scripts tag, add this:
"scripts": {
"build-prod": "node --max_old_space_size=5048 ./node_modules/#angular/cli/bin/ng build --prod",
},
Now in terminal/cmd, instead of using ng build --prod, just use
npm run build-prod
If you want to use this configuration for build only, just remove --prod from all 3 places.
I experienced the same problem today. My problem was that I was trying to import a lot of data into the database in my NextJS project.
So I installed the win-node-env package like this:
yarn add win-node-env
Because my development machine was Windows, I installed it locally rather than globally. You can also install it globally, like this: yarn global add win-node-env
Then, in the package.json file of my NextJS project, I added another startup script like this:
"dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev"
Here, I am passing the node option, i.e. setting 8 GB as the limit.
So my package.json file looks somewhat like this:
{
  "name": "my_project_name_here",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev",
    "build": "next build",
    "lint": "next lint"
  },
  ......
}
And then I run it like this:
yarn dev_more_mem
For me, the issue occurred only on my development machine (because I was importing large data), hence this solution. Thought I'd share it, as it might come in handy for others.
I had the same issue on a Windows machine, and I noticed that for some reason it didn't work in Git Bash, but it did work in PowerShell.
In case it helps people having this issue with nodejs apps that produce heavy logging: a colleague solved it by piping the standard output(s) to a file.
If you are trying to launch not node itself but some other software, for example webpack, you can use the environment variable and the cross-env package:
$ cross-env NODE_OPTIONS='--max-old-space-size=4096' \
webpack --progress --config build/webpack.config.dev.js
For Angular project bundling, I've added the line below to my package.json file in the scripts section:
"build-prod": "node --max_old_space_size=5120 ./node_modules/@angular/cli/bin/ng build --prod --base-href /"
Now, to bundle my code, I use npm run build-prod instead of ng build --requiredFlagsHere.
Hope this helps!
If none of the given answers are working for you, check whether your installed node is compatible with your system (i.e. 32-bit or 64-bit). Usually this type of error occurs because of incompatible node and OS versions; the terminal/system will not tell you about that, but will keep giving you the out-of-memory error.
None of these answers worked for me (I didn't try updating npm, though).
Here's what worked: my program was using two arrays - one parsed from JSON, the other generated from data in the first one. Just before the second loop, I simply had to set the first, JSON-parsed array back to [].
That way a lot of memory is freed, allowing the program to continue execution without failing memory allocation at some point.
Cheers!
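A minimal sketch of that pattern (hugeJsonString, buildIndexEntry, and processEntry are hypothetical placeholders; the point is dropping the reference so the GC can reclaim the first array):

let raw = JSON.parse(hugeJsonString);     // first array: parsed JSON
const index = raw.map(buildIndexEntry);   // second array: derived from the first

raw = null;  // the first array is no longer referenced, so the GC can reclaim it

for (const entry of index) {
  processEntry(entry);  // the heavy second loop now runs with that memory freed
}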
You can fix a "heap out of memory" error in Node.js with the approaches below.
Increase the amount of memory allocated to the Node.js process by using the --max-old-space-size flag when starting the application. For example, you can increase the limit to 4 GB by running node --max-old-space-size=4096 index.js.
Use a memory-leak detection tool, such as a Node.js heap dump module, to identify and fix memory leaks in your application (see the sketch after this list). You can also use the node inspector and chrome://inspect to check memory usage.
Optimize your code to reduce the amount of memory needed. This might involve reducing the size of data structures, reusing objects instead of creating new ones, or using more efficient algorithms.
Rely on the garbage collector (GC) to manage memory automatically. Node.js uses the V8 engine's garbage collector by default, and its behaviour can be tuned via V8 flags.
Use a containerization technology like Docker to limit the amount of memory available to the container.
Use a process manager like pm2, which can automatically restart the node application if it goes out of memory.
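For the heap-dump suggestion above, a minimal sketch using Node's built-in v8 module (one option among several; requires Node 11.13+, and heapdump-style packages work similarly):

const v8 = require('v8');

// Writes a .heapsnapshot file that can be opened in Chrome DevTools
// (chrome://inspect -> Memory) to hunt for leaking objects.
const file = v8.writeHeapSnapshot();
console.log('Heap snapshot written to', file);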

rpm and Yum don't believe a package is installed after Chef installs

Running chef-solo (Chef Omnibus install, 12.3) on CentOS 6.6.
My recipe has the following simple code:
package 'cloud-init' do
  action :install
end

log 'rpm-qi' do
  message `rpm -qi cloud-init`
  level :warn
end

log 'yum list' do
  message `yum list cloud-init`
  level :warn
end
But it outputs the following:
- install version 0.7.5-10.el6.centos.2 of package cloud-init
* log[rpm-qi] action write[2015-07-16T16:46:35+00:00] WARN: package cloud-init is not installed
[2015-07-16T16:46:35+00:00] WARN: Loaded plugins: fastestmirror, presto
Available Packages
cloud-init.x86_64 0.7.5-10.el6.centos.2 extras
I am at a loss as to why rpm/yum - and, in fact, rpmquery - don't see the package as installed.
EDIT: To clarify, I am specifically looking for the following string after the package install, in order to then apply a change to the file (I understand this is not a very Chef way to do things; I am happy to accept suggestions):
rpmquery -l cloud-init | grep 'distros/__init__.py$'
I have found that by using the following:
install_report = shell_out('yum install -y cloud-init').stdout
cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
I can then get the file I am looking for and perform
Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
The file's location varies by distribution, but I need to edit that file specifically with in-place changes.
Untested code, just to give the idea:
package 'cloud-init' do
  action :install
  notifies :run, "ruby_block[update_cloud_init]"
end

ruby_block 'update_cloud_init' do
  block do
    cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
    rc = Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
    rc.search_file_replace_line(/^what to find$/,
                                "replacement data for the line")
    rc.write_file
  end
end
The ruby_block example was taken and adapted from here.
I would rather go with a template to manage the whole file; what I don't understand is why you don't know where it will be in the first place...
Previous answer
I assume it's a compile vs. converge problem: at the time the message is stored (and thus your command is executed), the package is not yet installed.
Chef runs in two phases, compile then converge.
At compile time it builds a collection of resources, and at converge time it executes code for each resource to get it into the described state.
When your log resource is compiled, the ugly backticks are evaluated; at this time there is a package resource in the collection, but that resource has not been executed yet, so the output is correct.
I don't understand what you want to achieve with those log resources at all.
If you want to test your node's state after the chef run, use a handler, maybe calling ServerSpec as in Test-Kitchen.
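As a hedged illustration of the compile/converge distinction (adapting the question's own log resource): Chef's lazy evaluation defers a property's value until converge time, so the backtick command would run after the package install instead of before it.

package 'cloud-init' do
  action :install
end

log 'rpm-qi' do
  # lazy {} defers evaluation to converge time, after the package
  # resource above has actually run, so rpm now sees the package
  message lazy { `rpm -qi cloud-init` }
  level :warn
end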

During apachectl start getting open shared object file: No such file or directory

After successful installation of Apache2 (2.4.4), I tried to start the httpd server, but I am getting the error below:
bimlesh@server:/usr/local/apache2/bin$ ./apachectl start
httpd: Syntax error on line 71 of /usr/local/apache2/conf/httpd.conf: Cannot load modules/mod_authn_core.so into server: /usr/local/apache2/modules/mod_authn_core.so: cannot open shared object file: No such file or directory
bimlesh@server:/usr/local/apache2/bin$
I looked at /usr/local/apache2/modules/ and indeed those .so files are not available.
Can anyone please help me figure out how to get rid of this error?
If I look at the /usr/local/apache2/modules/ folder, I see no .so files available:
bimlesh@server:/usr/local/apache2/bin$ ls ../modules/
httpd.exp mod_authn_file.a mod_cache_disk.a mod_file_cache.a mod_logio.la mod_ratelimit.a mod_socache_dbm.la
mod_access_compat.a mod_authn_file.la mod_cache_disk.la mod_file_cache.la mod_mime.a mod_ratelimit.la mod_socache_memcache.a
mod_access_compat.la mod_authn_socache.a mod_cache.la mod_filter.a mod_mime.la mod_remoteip.a mod_socache_memcache.la
mod_actions.a mod_authn_socache.la mod_cgid.a mod_filter.la mod_negotiation.a mod_remoteip.la mod_socache_shmcb.a
mod_actions.la mod_authz_core.a mod_cgid.la mod_headers.a mod_negotiation.la mod_reqtimeout.a mod_socache_shmcb.la
mod_alias.a mod_authz_core.la mod_dav.a mod_headers.la mod_proxy.a mod_reqtimeout.la mod_speling.a
mod_alias.la mod_authz_dbd.a mod_dav_fs.a mod_include.a mod_proxy_ajp.a mod_request.a mod_speling.la
mod_allowmethods.a mod_authz_dbd.la mod_dav_fs.la mod_include.la mod_proxy_ajp.la mod_request.la mod_status.a
mod_allowmethods.la mod_authz_dbm.a mod_dav.la mod_info.a mod_proxy_balancer.a mod_rewrite.a mod_status.la
mod_auth_basic.a mod_authz_dbm.la mod_dbd.a mod_info.la mod_proxy_balancer.la mod_rewrite.la mod_substitute.a
mod_auth_basic.la mod_authz_groupfile.a mod_dbd.la mod_lbmethod_bybusyness.a mod_proxy_connect.a mod_sed.a mod_substitute.la
mod_auth_digest.a mod_authz_groupfile.la mod_deflate.a mod_lbmethod_bybusyness.la mod_proxy_connect.la mod_sed.la mod_unique_id.a
mod_auth_digest.la mod_authz_host.a mod_deflate.la mod_lbmethod_byrequests.a mod_proxy_express.a mod_session.a mod_unique_id.la
mod_auth_form.a mod_authz_host.la mod_dir.a mod_lbmethod_byrequests.la mod_proxy_express.la mod_session_cookie.a mod_unixd.a
mod_auth_form.la mod_authz_owner.a mod_dir.la mod_lbmethod_bytraffic.a mod_proxy_fcgi.a mod_session_cookie.la mod_unixd.la
mod_authn_anon.a mod_authz_owner.la mod_dumpio.a mod_lbmethod_bytraffic.la mod_proxy_fcgi.la mod_session_dbd.a mod_userdir.a
mod_authn_anon.la mod_authz_user.a mod_dumpio.la mod_lbmethod_heartbeat.a mod_proxy_ftp.a mod_session_dbd.la mod_userdir.la
mod_authn_core.a mod_authz_user.la mod_env.a mod_lbmethod_heartbeat.la mod_proxy_ftp.la mod_session.la mod_version.a
mod_authn_core.la mod_autoindex.a mod_env.la mod_log_config.a mod_proxy_http.a mod_setenvif.a mod_version.la
mod_authn_dbd.a mod_autoindex.la mod_expires.a mod_log_config.la mod_proxy_http.la mod_setenvif.la mod_vhost_alias.a
mod_authn_dbd.la mod_buffer.a mod_expires.la mod_log_debug.a mod_proxy.la mod_slotmem_shm.a mod_vhost_alias.la
mod_authn_dbm.a mod_buffer.la mod_ext_filter.a mod_log_debug.la mod_proxy_scgi.a mod_slotmem_shm.la
mod_authn_dbm.la mod_cache.a mod_ext_filter.la mod_logio.a mod_proxy_scgi.la mod_socache_dbm.a
bimlesh@server:/usr/local/apache2/bin$
Run
find / -type f -name mod_authn_core.so
or, if needed, install updatedb (mlocate, slocate, findutils or something similar) and run
updatedb
and then (or before it)
locate mod_authn_core.so
to find out whether these files are somewhere other than where they should be, and possibly fix the location with a symbolic link or by moving the files to where they are expected to be.
If the file you need is nowhere to be found, you may need to comment it out in httpd.conf (if it's a specific module), or (re)install the apache package(s). I believe mod_authn_core should be in the basic package, not in a separate module, though. Possibly someone removed it blindly or accidentally, or some intruder messed with the system, or the disk got corrupted, or whatever else.
PS. Modules are usually under "lib", e.g. /usr/local/lib/apache2 or /usr/lib/apache2, or /usr/lib/apache2/modules or similar, not in /usr/local/apache2/modules, though that depends on how the package was compiled.
You might also run
apache2ctl -t -D DUMP_MODULES
to find out which modules were compiled as shared or static. You should also include information about your distribution, and note that you're building/installing from source.
Also, have a look here: http://httpd.apache.org/docs/2.4/install.html#configure
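Since your modules directory contains only .a and .la files, the build apparently never produced shared objects. A hedged sketch of rebuilding httpd with shared modules (flags from the configure documentation linked above; adjust the prefix to your layout):

./configure --prefix=/usr/local/apache2 --enable-so --enable-mods-shared=all
make
make install

After this, /usr/local/apache2/modules should contain the mod_*.so files that the LoadModule lines in httpd.conf expect.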
