App engine shutting down and starting up after ~10 minutes - google-app-engine

For some reason App Engine shuts down sometimes (so far it has happened 3 times on different dates) and then starts up again after ~10 minutes (manually stopping and starting the instance takes ~1 min, so this seems very long).
There are no errors or other log entries before the shutdown that indicate a cause (there is no /_ah/stop request preceding the shutdown, either).
The application runs in manual scaling mode with 1 instance on App Engine standard. It is "idle" most of the time (it does run things, but doesn't receive requests). The runtime is java11.
What could be the reason?
logs before shutdown
2021-06-21 23:45:25.372 EDT
[start] 2021/06/22 03:45:25.371929 Quitting on terminated signal
Default
2021-06-21 23:45:25.373 EDT
I0622 03:45:25.373024 24 statistician.cc:113] Statistics of class_prepare_time_micros: mean = 10.4196, stdev = 31.173, min = 2, max = 1046, samples = 3203
Default
2021-06-21 23:45:25.710 EDT
I0622 03:45:25.381978 1340 jvmti_agent.cc:222] Java VM termination
Default
2021-06-21 23:45:25.711 EDT
I0622 03:45:25.384263 29 jvmti_agent_thread.cc:99] Agent thread exited: CloudDebugger_main_worker_thread
Default
2021-06-21 23:45:25.711 EDT
I0622 03:45:25.387276 1340 worker.cc:113] Debugger threads terminated
Default
2021-06-21 23:45:25.712 EDT
I0622 03:45:25.387315 1340 jvmti_agent.cc:236] JvmtiAgent::JvmtiOnVMDeath cleanup time: 5348 microseconds
Default
2021-06-21 23:45:25.811 EDT
[start] 2021/06/22 03:45:25.810839 Start program failed: termination triggered by nginx exit
Info
2021-06-21 23:54:58.069 EDT
GET
200
111 B
5.896 s
/_ah/start
init logs
2021-06-21 23:45:25.372 EDT
[start] 2021/06/22 03:45:25.371929 Quitting on terminated signal
Default
2021-06-21 23:45:25.811 EDT
[start] 2021/06/22 03:45:25.810839 Start program failed: termination triggered by nginx exit
Default
2021-06-21 23:54:58.440 EDT
[start] 2021/06/22 03:54:58.438391 Starting app
Default
2021-06-21 23:54:58.441 EDT
[start] 2021/06/22 03:54:58.440653 Executing: /bin/sh -c exec serve /workspace/ingest.jar
Default
2021-06-21 23:54:58.445 EDT
[start] 2021/06/22 03:54:58.445422 Waiting for network connection open. Subject:"app/invalid" Address:127.0.0.1:8080
Default
2021-06-21 23:54:58.446 EDT
[start] 2021/06/22 03:54:58.445915 Waiting for network connection open. Subject:"app/valid" Address:127.0.0.1:8081
Default
2021-06-21 23:54:58.483 EDT
[serve] 2021/06/22 03:54:58.482876 Serve started.
Default
2021-06-21 23:54:58.485 EDT
[serve] 2021/06/22 03:54:58.483776 Args: {runtimeLanguage:java runtimeName:java11 memoryMB:512 positional:[/workspace/ingest.jar]}
Default
2021-06-21 23:54:58.487 EDT
[serve] 2021/06/22 03:54:58.486069 Running /bin/sh -c exec java -agentpath:/opt/cdbg/cdbg_java_agent.so=--log_dir=/var/log -jar /workspace/ingest.jar
Default
2021-06-21 23:54:59.702 EDT
[start] 2021/06/22 03:54:59.701720 Wait successful. Subject:"app/valid" Address:127.0.0.1:8081 Attempts:251 Elapsed:1.255602687s
Default
2021-06-21 23:54:59.702 EDT
[start] 2021/06/22 03:54:59.701951 Starting nginx
Default
2021-06-21 23:54:59.711 EDT
[start] 2021/06/22 03:54:59.710592 Waiting for network connection open. Subject:"nginx" Address:127.0.0.1:8080
Default
2021-06-21 23:54:59.753 EDT
[start] 2021/06/22 03:54:59.745519 Wait successful. Subject:"nginx" Address:127.0.0.1:8080 Attempts:5 Elapsed:33.709637ms
an expanded init log
Thanks!

If no requests come in for a while, GAE will shut the instance down. If you would like to keep your instances running, consider configuring min_idle_instances. See this document.
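For reference, a minimal app.yaml sketch. Note that min_idle_instances is an automatic-scaling setting, so this assumes switching away from manual scaling; the values are illustrative, not prescriptive:

runtime: java11
automatic_scaling:
  min_idle_instances: 1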

Related

How to start systemctl service in Ubuntu 16.10 with simple Daemon C code

I wrote a simple C program:
#include <stdio.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

int main()
{
    pid_t pid;
    pid = fork();
    if (pid > 0) {
        exit(1);
    }
    FILE *fp;
    fp = fopen("pid.pid", "a");
    fprintf(fp, "%d", getpid());
    fclose(fp);
    printf("\npid = %d\n", pid);
    printf("\ngetpid = %d\n", getpid());
    puts("\nAfter fclose() \n");
    umask(0);
    while (1) {}
    return 0;
}
and Daemon1.service
[Unit]
Description=Socket programming with Daemon
[Service]
User=root
Type=forking
WorkingDirectory=/Omkar/Doc/systemctl/
ExecStart=/Omkar/Doc/systemctl/main
Restart=always
PIDFile=/Omkar/Doc/systemctl/pid.pid
[Install]
WantedBy=multi-user.target
and stored it at
/etc/systemd/system
After this I ran these commands:
systemctl daemon-reload
systemctl enable Daemon1.service
systemctl start Daemon1.service
Then I got this error:
Job for Daemon1.service failed because the control process exited with error code.
See "systemctl status Daemon1.service" and "journalctl-xe" for details.
Then I checked the status of the service with this command:
systemctl status Daemon1.service
and got this:
● Daemon1.service
Loaded: loaded (/etc/systemd/system/Daemon1.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Tue 2019-11-19 18:21:26 IST; 3min 26s ago
Process: 5868 ExecStart=/Omkar/Doc/systemctl/main (code=exited, status=1/FAILURE)
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Failed to start Daemon1.service.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Daemon1.service: Unit entered failed state.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Daemon1.service: Failed with result 'exit-code'.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Daemon1.service: Service hold-off time over, scheduling restart.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Stopped Daemon1.service.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Daemon1.service: Start request repeated too quickly.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Failed to start Daemon1.service.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Daemon1.service: Unit entered failed state.
Nov 19 18:21:26 pt32-H81M-S systemd[1]: Daemon1.service: Failed with result 'start-limit-hit'.
My service is not running. What do I need to change so my code will work?
I pass the executable built from the C code to ExecStart= in Daemon1.service.
There is a line in your output that is giving you a not-so-subtle hint: (code=exited, status=1/FAILURE)
● Daemon1.service
Loaded: loaded (/etc/systemd/system/Daemon1.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Tue 2019-11-19 18:21:26 IST; 3min 26s ago
Process: 5868 ExecStart=/Omkar/Doc/systemctl/main (code=exited, status=1/FAILURE)
With Type=forking, systemd treats the exit status of the initial process as the outcome of startup, so a nonzero exit marks the service as failed. Modify your code to return 0 instead of 1 to the OS after forking:
if (pid > 0) {
    exit(0);
}
You should be back in business after that small adjustment:
# systemctl status Daemon1.service
● Daemon1.service
Loaded: loaded (/etc/systemd/system/Daemon1.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-11-19 08:49:33 CST; 5s ago
Process: 20484 ExecStart=/root/stackoverflow/Daemon1 (code=exited, status=0/SUCCESS)
Main PID: 20486 (Daemon1)
CGroup: /system.slice/Daemon1.service
└─20486 /root/stackoverflow/Daemon1
Nov 19 08:49:33 lm systemd[1]: Starting Daemon1.service...
Nov 19 08:49:33 lm systemd[1]: Started Daemon1.service.
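For completeness, a minimal corrected sketch of the whole program; the error handling and the pause() idle loop are my additions, not part of the original question:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        exit(EXIT_FAILURE);   /* fork failed: report startup failure to systemd */
    if (pid > 0)
        exit(EXIT_SUCCESS);   /* parent exits 0, which Type=forking reads as success */

    /* Child continues as the daemon and writes the pid file
       that PIDFile= in the unit refers to. */
    FILE *fp = fopen("pid.pid", "w");
    if (fp != NULL) {
        fprintf(fp, "%d", (int)getpid());
        fclose(fp);
    }
    umask(0);
    while (1)
        pause();              /* idle without burning CPU, unlike while(1){} */
    return 0;
}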

Cannot sync with the NTP server

I am using Lubuntu Linux 18.04 Bionic. When I run ntpq -pn I cannot see that my computer is synced with my desired NTP server.
I have tried several tutorials like here: LINK. I took the NTP servers from Google HERE and included all 4 servers in my config file.
Then, I did the following things in order to sync with one of the Google NTP servers:
sudo service ntp stop
sudo ntpdate time1.google.com, which logged ntpdate[2671]: adjust time server 216.239.35.0 offset -0.000330 sec
sudo service ntp start
Here is my /etc/ntp.conf file:
driftfile /var/lib/ntp/ntp.drift
leapfile /usr/share/zoneinfo/leap-seconds.list
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited
restrict 127.0.0.1
restrict ::1
restrict source notrap nomodify noquery
server time1.google.com iburst
server time2.google.com iburst
server time3.google.com iburst
server time4.google.com iburst
After doing the steps above, I got this result from ntpq -pn:
remote refid st t when poll reach delay offset jitter
+216.239.35.0 .GOOG. 1 u 33 64 1 36.992 0.519 0.550
+216.239.35.4 .GOOG. 1 u 32 64 1 20.692 0.688 0.612
*216.239.35.8 .GOOG. 1 u 36 64 1 22.233 0.088 1.091
-216.239.35.12 .GOOG. 1 u 32 64 1 33.480 -0.218 1.378
Why is my computer not synced?
EDIT:
Here is the output of sudo systemctl status ntp.service:
ntp.service - Network Time Service
Loaded: loaded (/lib/systemd/system/ntp.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-01-17 11:37:33 CET; 17min ago
Docs: man:ntpd(8)
Process: 2704 ExecStart=/usr/lib/ntp/ntp-systemd-wrapper (code=exited, status=0/SUCCESS)
Main PID: 2712 (ntpd)
CGroup: /system.slice/ntp.service
└─2712 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:108
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: proto: precision = 1.750 usec (-19)
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, e
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: Listen normally on 3 wlan0 192.168.86.26:123
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: Listen normally on 4 lo [::1]:123
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: Listen normally on 5 wlan0 [fe80::71d6:ec6e:fa92:b53%4]:123
Jan 17 11:37:33 ELAR-Systems ntpd[2712]: Listening on routing socket on fd #22 for interface updates
Your system time actually is getting synced (the * in front of 216.239.35.8 in your ntpq output marks the peer you are currently synchronized to), but the clock drifts off again very quickly.
The Raspberry Pi, Arduino, Asus Tinker and the other single-board computers have no onboard RTC (real-time clock) and no battery to keep it running constantly. It has nothing to do with RAM or power; there is simply no hardware clock on the computer.
On my Raspberry Pi, the time drifted off by several minutes within an hour.
The "software clock" on such a computer is affected by system load and is very unstable.
An RTC extension board (for the RPi) fixes this. (Product image omitted; source: www.robotshop.com)

Controlling a C daemon from another program

I'm trying to control a C daemon from another userspace program.
- Simple C daemon
This daemon is simply a C program which daemonizes itself and logs a message every second through syslog.
#include <stdlib.h>
#include <stdio.h>
#include <syslog.h>
#include <unistd.h>
#include <signal.h>

void bye(int sig);   /* signal handlers take the signal number */

int main()
{
    printf("Daemon starting ...\n");
    openlog("daemon-test", LOG_PID, LOG_DAEMON);
    signal(SIGTERM, bye);
    if (0 != daemon(0, 0))
    {
        syslog(LOG_ERR, "Can't daemonize\n");
        return EXIT_FAILURE;
    }
    syslog(LOG_INFO, "Daemon started !\n");
    while (1)
    {
        syslog(LOG_INFO, "Daemon alive\n");
        sleep(1);
    }
    return EXIT_SUCCESS;
}

void bye(int sig)
{
    (void)sig;
    syslog(LOG_INFO, "Daemon killed !\n");
    exit(EXIT_SUCCESS);
}
- Launching and killing the daemon from a C test program
For test purposes I have developed a minimal example. I'm using popen to launch the daemon because I want my program to continue its execution.
After 5 seconds, the test program is supposed to kill the daemon.
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

#define DAEMON_NAME "daemon-test"

int main()
{
    FILE *pipe = NULL;
    int i = 0;
    printf("Launching '%s' program\n", DAEMON_NAME);
    if (NULL == (pipe = popen(DAEMON_NAME, "re")))
    {
        printf("An error occurred launching '%s': %m\n", DAEMON_NAME);
        return EXIT_FAILURE;
    }
    printf("Program '%s' launched\n", DAEMON_NAME);
    while (i < 5)
    {
        printf("Program alive !\n");
        sleep(1);
        i++;
    }
    if (NULL == (pipe = popen("killall " DAEMON_NAME, "re")))
    {
        printf("An error occurred killing '%s' program: %m\n", DAEMON_NAME);
        return EXIT_FAILURE;
    }
    printf("Program '%s' killed\n", DAEMON_NAME);
    return EXIT_SUCCESS;
}
Test program log:
$ ./popenTest
Launching 'daemon-test' program
Program 'daemon-test' launched
Program alive !
Program alive !
Program alive !
Program alive !
Program alive !
Program 'daemon-test' killed
Syslog:
Jun 25 13:58:15 PC325 daemon-test[4445]: Daemon started !
Jun 25 13:58:15 PC325 daemon-test[4445]: Daemon alive
Jun 25 13:58:16 PC325 daemon-test[4445]: Daemon alive
Jun 25 13:58:17 PC325 daemon-test[4445]: Daemon alive
Jun 25 13:58:18 PC325 daemon-test[4445]: Daemon alive
Jun 25 13:58:19 PC325 daemon-test[4445]: Daemon alive
Jun 25 13:58:20 PC325 daemon-test[4445]: Daemon alive
Jun 25 13:58:20 PC325 daemon-test[4445]: Daemon killed !
So I'm able to launch and kill the daemon from my C program; however, I would like to improve the behaviour in some specific cases.
- Handling daemon crash
The daemon may fail at some point; in that case the control program should be notified so the daemon can be relaunched. My problem is detecting that the daemon has stopped.
I have thought about launching a thread that waits for daemon termination via a call to pclose, but it won't work, as daemonization has already closed the file descriptors and detached the process.
So I'm looking for the best way to have the program notified on daemon exit.
I could poll using Linux commands via the exec family (such as pgrep daemon-test or ps aux | grep daemon-test), but I think there are more efficient ways to achieve that.
- Handling test program error
If the test program is killed or fails before it kills the daemon, then at the next execution two instances of the daemon will run at the same time.
Test program log:
$ ./popenTest
Launching 'daemon-test' program
Program 'daemon-test' launched
Program alive !
Program alive !
Program alive !
^C
$ ./popenTest
Launching 'daemon-test' program
Program 'daemon-test' launched
Program alive !
Program alive !
Program alive !
Program alive !
Program alive !
Program 'daemon-test' killed
Syslog:
Jun 25 14:17:25 PC325 daemon-test[4543]: Daemon started !
Jun 25 14:17:25 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:26 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:27 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:28 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:29 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:29 PC325 daemon-test[4547]: Daemon started !
Jun 25 14:17:29 PC325 daemon-test[4547]: Daemon alive
Jun 25 14:17:30 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:30 PC325 daemon-test[4547]: Daemon alive
Jun 25 14:17:31 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:31 PC325 daemon-test[4547]: Daemon alive
Jun 25 14:17:32 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:32 PC325 daemon-test[4547]: Daemon alive
Jun 25 14:17:33 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:33 PC325 daemon-test[4547]: Daemon alive
Jun 25 14:17:34 PC325 daemon-test[4543]: Daemon alive
Jun 25 14:17:34 PC325 daemon-test[4547]: Daemon alive
Jun 25 14:17:34 PC325 daemon-test[4543]: Daemon killed !
Jun 25 14:17:34 PC325 daemon-test[4547]: Daemon killed !
I want to avoid this situation by checking whether daemon instances are already running. If not, I can launch the daemon from the control program.
Otherwise, if one or several instances of the daemon are running, I shall kill them before launching a new one.
This could be achieved by calling killall daemon-test, but calling this command at each execution doesn't satisfy me, because it's useless most of the time.
Moreover, I would like to explicitly log the situation at each execution, and so I want to know exactly how many instances were running in that case.
Once again this can be resolved easily using Linux command calls, but I'm looking for the most efficient way to do it.
Does anybody know how I could implement daemon process control without having to rely on Linux command calls?
EDIT: June 26th 2018
I should have made this clear from the beginning, but my aim is to monitor the daemon process without having to modify its code.
So the daemon doesn't write its PID to a file and is always detached from its caller.
Instead of running the program via popen, why not use the good old POSIX fork + exec? It gives you a bit more flexibility.
Now, to answer your question:
My problem is to detect that the daemon has been stopped.
To do this you have to listen for the SIGCHLD signal in your parent/controlling process. This is good enough since you directly invoked the process, but if you called a shell script which then forked your daemon, it would get difficult. This is why most daemons write something called a pid file: a file written by the daemon early on, with its PID as the only content. Normally people put it at /tmp/mydaemon.pid or something like that.
On Linux, your controlling process can read the PID from this file, and then every second test whether /proc/<pid>/exe exists. If not, you know the daemon died. For example, if your child program's PID is 1234, then /proc/1234/exe will be a soft link to the actual location of the child program's executable.
Something like this:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    FILE *f;
    int pid_child;
    char proc_path[256];

    /* Read the daemon's PID from its pid file. */
    f = fopen("/tmp/mydaemon.pid", "r");
    if (f == NULL || fscanf(f, "%d", &pid_child) != 1) {
        fprintf(stderr, "Cannot read pid file\n");
        return EXIT_FAILURE;
    }
    fclose(f);

    snprintf(proc_path, sizeof(proc_path), "/proc/%d/exe", pid_child);
    while (1) {
        if (access(proc_path, F_OK) == 0) {
            printf("Program alive !\n");
            sleep(1);
        } else {
            printf("Program dead!\n");
            break;
        }
    }
    return EXIT_SUCCESS;
}
In fact, this is roughly how many init systems are implemented. See rc, systemd, upstart etc. for a better understanding of how they implement this in more detail.
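And here is a rough sketch of the fork + exec + SIGCHLD approach suggested above; the ./daemon-test path is an assumption for illustration. One caveat: this only detects the death of the process you forked, so if the child detaches itself via daemon(), SIGCHLD arrives at detach time and the pid-file method above remains the right tool:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t child_exited = 0;

static void on_sigchld(int sig)
{
    (void)sig;
    child_exited = 1;   /* only set a flag; do the real work outside the handler */
}

int main(void)
{
    signal(SIGCHLD, on_sigchld);

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: replace ourselves with the monitored program. */
        execl("./daemon-test", "daemon-test", (char *)NULL);
        _exit(127);     /* only reached if exec failed */
    }

    while (1) {
        if (child_exited) {
            int status;
            if (waitpid(pid, &status, WNOHANG) == pid) {
                printf("Child %d exited with status %d, relaunch here\n",
                       (int)pid, WEXITSTATUS(status));
                break;
            }
            child_exited = 0;
        }
        sleep(1);       /* interrupted early by SIGCHLD, which is fine */
    }
    return EXIT_SUCCESS;
}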
Alternatively, you can run a socket server in the daemon and then use a control client as a normal CLI. The CLI sends test messages or control commands to the daemon, and the daemon responds. Based on the daemon's responses, the CLI can watch the daemon's status and control it.
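A rough sketch of the CLI side of that idea, assuming the daemon listens on a Unix domain socket at /tmp/daemon-test.sock (a hypothetical path) and understands a made-up "status" command:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/tmp/daemon-test.sock", sizeof(addr.sun_path) - 1);

    /* A failed connect() already tells us the daemon is not listening. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("Daemon unreachable\n");
        close(fd);
        return 1;
    }

    write(fd, "status\n", 7);                    /* send a control command */

    char buf[128];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* read the daemon's reply */
    if (n > 0) {
        buf[n] = '\0';
        printf("Daemon replied: %s", buf);
    }
    close(fd);
    return 0;
}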

Start Riak crashing after 30 seconds

$ riak start crashes about 30 seconds after starting. I have the following (changed) settings in my riak.conf:
search = on
storage_backend = leveldb
riak_control = on
crash.log contains the following:
2016-06-30 14:49:38 =ERROR REPORT====
** Generic server yz_solr_proc terminating
** Last message in was {check_solr,0}
** When Server state == {state,"./data/yz",#Port<0.9441>,8093,8985}
** Reason for termination ==
** "solr didn't start in alloted time"
2016-06-30 14:49:38 =CRASH REPORT====
crasher:
initial call: yz_solr_proc:init/1
pid: <0.582.0>
registered_name: yz_solr_proc
exception exit: {"solr didn't start in alloted time",[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,744}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
ancestors: [yz_solr_sup,yz_sup,<0.578.0>]
messages: [{'EXIT',#Port<0.9441>,normal}]
links: [<0.580.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 16170
neighbours:
2016-06-30 14:49:38 =SUPERVISOR REPORT====
Supervisor: {local,yz_solr_sup}
Context: child_terminated
Reason: "solr didn't start in alloted time"
Offender: [{pid,<0.582.0>},{name,yz_solr_proc},{mfargs,{yz_solr_proc,start_link,["./data/yz","./data/yz_temp",8093,8985]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]
2016-06-30 14:49:39 =ERROR REPORT====
** Generic server yz_solr_proc terminating
** Last message in was {#Port<0.12204>,{exit_status,1}}
** When Server state == {state,"./data/yz",#Port<0.12204>,8093,8985}
** Reason for termination ==
** {"solr OS process exited",1}
2016-06-30 14:49:39 =CRASH REPORT====
crasher:
initial call: yz_solr_proc:init/1
pid: <0.7631.0>
registered_name: yz_solr_proc
exception exit: {{"solr OS process exited",1},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,744}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
ancestors: [yz_solr_sup,yz_sup,<0.578.0>]
messages: [{'EXIT',#Port<0.12204>,normal}]
links: [<0.580.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 1598
stack_size: 27
reductions: 8968
neighbours:
2016-06-30 14:49:39 =SUPERVISOR REPORT====
Supervisor: {local,yz_solr_sup}
Context: child_terminated
Reason: {"solr OS process exited",1}
Offender: [{pid,<0.7631.0>},{name,yz_solr_proc},{mfargs,{yz_solr_proc,start_link,["./data/yz","./data/yz_temp",8093,8985]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]
2016-06-30 14:49:39 =SUPERVISOR REPORT====
Supervisor: {local,yz_solr_sup}
Context: shutdown
Reason: reached_max_restart_intensity
Offender: [{pid,<0.7631.0>},{name,yz_solr_proc},{mfargs,{yz_solr_proc,start_link,["./data/yz","./data/yz_temp",8093,8985]}},{restart_type,permanent},{shutdown,5000},{child_type,worker}]
2016-06-30 14:49:39 =SUPERVISOR REPORT====
Supervisor: {local,yz_sup}
Context: child_terminated
Reason: shutdown
Offender: [{pid,<0.580.0>},{name,yz_solr_sup},{mfargs,{yz_solr_sup,start_link,[]}},{restart_type,permanent},{shutdown,5000},{child_type,supervisor}]
2016-06-30 14:49:39 =SUPERVISOR REPORT====
Supervisor: {local,yz_sup}
Context: shutdown
Reason: reached_max_restart_intensity
Offender: [{pid,<0.580.0>},{name,yz_solr_sup},{mfargs,{yz_solr_sup,start_link,[]}},{restart_type,permanent},{shutdown,5000},{child_type,supervisor}]
Make sure the ports used by Solr are available. The defaults are 8093 for search and 8985 for JMX.
Tune your system to improve performance. Follow Improving Performance for Linux.
In riak.conf, increase the JVM's heap size; the default of 1 GB is often not enough. For example, search.solr.jvm_options=-d64 -Xms2g -Xmx4g -XX:+UseStringCache -XX:+UseCompressedOops (see Search Settings).
On a slow machine, Solr may simply take longer to start. Try increasing search.solr.start_timeout (see the sketch after this list).
The Solr directories must be writable (usually /var/lib/riak/data/yz*), and a compatible JVM must be used.
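For example, a riak.conf sketch combining the last two points; the values are illustrative and option names can vary between Riak versions:
search.solr.start_timeout = 60s
search.solr.jvm_options = -d64 -Xms2g -Xmx4g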
Riak's internal Solr uses localhost and 127.0.0.1 as the default host, so the following should be defined in your /etc/hosts file:
127.0.0.1 localhost
FYI, if you use Windows, your hosts file location may be different.

CakePHP dies instead of telling me the error

I have CakePHP, but sometimes, if I make a (usually syntax) error, it doesn't tell me where and what's wrong; it just dies and I get a blank page (screenshot omitted).
Why is that, and how can I get the line number and error type instead?
Debug is on. Version 2.2.3
UPDATE 1:
Configure::write('Error', array(
    'handler' => 'ErrorHandler::handleError',
    'level' => E_ALL & ~E_DEPRECATED & ~E_STRICT,
    'trace' => true
));
Configure::write('Exception', array(
    'handler' => 'ErrorHandler::handleException',
    'renderer' => 'ExceptionRenderer',
    'log' => true
));
And the error files (screenshot omitted).
UPDATE 2:
app/tmp/error.log had permission problems; after I ran chmod -R 777 app/tmp/log/ the following started appearing:
2013-09-13 08:17:32 Error: Fatal Error (4): parse error in [/Users/petarpetrov/jobsAdvent/sunshine/app/View/Themed/Jobsearch/Users/employer_setting.ctp, line 24]
2013-09-13 08:17:32 Error: [FatalErrorException] parse error
#0 /Users/petarpetrov/jobsAdvent/sunshine/lib/Cake/Error/ErrorHandler.php(161): ErrorHandler::handleFatalError(4, 'parse error', '/Users/petarpet...', 24)
#1 [internal function]: ErrorHandler::handleError(4, 'parse error', '/Users/petarpet...', 24, Array)
#2 /Users/petarpetrov/jobsAdvent/sunshine/lib/Cake/Core/App.php(926): call_user_func('ErrorHandler::h...', 4, 'parse error', '/Users/petarpet...', 24, Array)
#3 /Users/petarpetrov/jobsAdvent/sunshine/lib/Cake/Core/App.php(899): App::_checkFatalError()
#4 [internal function]: App::shutdown()
#5 {main}
/var/logs/apache2/error_log has no new lines after such an error. However, I do have the following entries there:
[Thu Sep 12 12:43:37 2013] [notice] caught SIGTERM, shutting down
[Thu Sep 12 12:44:08 2013] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
PHP: parse error in /private/etc/php.ini on line 1927
[Thu Sep 12 12:44:08 2013] [notice] Digest: generating secret for digest authentication ...
[Thu Sep 12 12:44:08 2013] [notice] Digest: done
[Thu Sep 12 12:44:08 2013] [notice] Apache/2.2.22 (Unix) DAV/2 PHP/5.3.15 with Suhosin-Patch mod_ssl/2.2.22 OpenSSL/0.9.8x configured -- resuming normal operations
[Thu Sep 12 12:53:55 2013] [notice] child pid 467 exit signal Segmentation fault (11)
[Thu Sep 12 12:53:55 2013] [notice] child pid 466 exit signal Segmentation fault (11)
[Thu Sep 12 13:02:14 2013] [notice] child pid 468 exit signal Segmentation fault (11)
[Thu Sep 12 13:02:33 2013] [notice] child pid 545 exit signal Segmentation fault (11)
[Thu Sep 12 16:21:26 2013] [notice] child pid 463 exit signal Segmentation fault (11)
[Thu Sep 12 16:21:28 2013] [notice] child pid 465 exit signal Segmentation fault (11)
[Fri Sep 13 10:14:50 2013] [notice] child pid 462 exit signal Segmentation fault (11)
Network tab (screenshot omitted).
Check app/tmp/logs/error.log
Check the web server error and access logs!
Check the Network tab in Chrome and inspect the request and response there
Or use something like Charles (http://www.charlesproxy.com/) to monitor the request and response
Check what headers the application is returning
