Page timing out with 404 error when script runs longer than 120 seconds

I have a script that needs to run for up to 15 minutes through a cron job. But I can't get the script to stay running. It errors out with a 404 message at exactly 120 seconds. Very strange. I have even called my hosting provider. I am at a loss.
I have set max_execution_time = 5000
Does anyone have any suggestions? Here is the test script:
set_time_limit(0); // attempt to remove PHP's execution time limit for this script
$array = array('1', '2', '3');
foreach ($array as $row) {
    echo $row . '<br>';
    sleep(50); // 3 iterations x 50 seconds = 150 seconds, enough to pass the 120-second mark
}

You may also need to look at Apache's Timeout setting: you can raise the max execution time value in php.ini and still get cut off by the Timeout value in Apache's httpd.conf.
You can find httpd.conf by running httpd -V in a terminal (the command name depends on your system/Apache version; it can also be apache -V). If you don't have terminal access, you may need to contact your web host. Example output:
bash-3.2# httpd -V
Server version: Apache/2.2.22 (Unix)
Server built: Aug 24 2012 17:16:58
Server's Module Magic Number: 20051115:30
Server loaded: APR 1.4.5, APR-Util 1.3.12
Compiled using: APR 1.4.5, APR-Util 1.3.12
Architecture: 64-bit
Server MPM: Prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APACHE_MPM_DIR="server/mpm/prefork"
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_FLOCK_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=128
-D HTTPD_ROOT="/usr"
-D SUEXEC_BIN="/usr/bin/suexec"
-D DEFAULT_PIDLOG="/private/var/run/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_LOCKFILE="/private/var/run/accept.lock"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="/private/etc/apache2/mime.types"
-D SERVER_CONFIG_FILE="/private/etc/apache2/httpd.conf" <-- HERE IT IS
httpd.conf could look like this: https://www.devside.net/guides/config/linux/httpd-conf
Notice the line:
Timeout 300
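As a minimal sketch of the two changes (the directive names are standard; the values below are assumptions sized for a 15-minute job):
In httpd.conf:
Timeout 900
In php.ini (or rely on set_time_limit(0) in the script):
max_execution_time = 900
Then reload Apache so the new Timeout takes effect, e.g.:
sudo apachectl -k graceful
(or sudo service httpd restart, depending on your system)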

Related

SQL Server Docker container immediately exiting

I am trying to run a SQL Server container on my mac through Docker.
I ran the following command:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=strongpassword" -p 1433:1433 --name sqlservercontainer -d mcr.microsoft.com/mssql/server:2019-latest
But the container is immediately exiting.
The docker logs for the container look like this:
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
/opt/mssql/bin/sqlservr: Error: The system directory [/.system] could not be created. File: LinuxDirectory.cpp:420 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
/opt/mssql/bin/sqlservr: Error: The system directory [/.system] could not be created. File: LinuxDirectory.cpp:420 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
Any idea what needs to be done to solve this?
If you use the sudo command to create a folder outside of your home directory structure for use by Docker then that folder is going to be owned by the root user, e.g.:
$ sudo mkdir /var/mssql-data
$ ls -la /var/mssql-data
total 0
drwxr-xr-x 2 root wheel 64B 26 May 11:31 ./
drwxr-xr-x 30 root wheel 960B 26 May 11:31 ../
When you try to launch an SQL Server container using a volume mapping with that folder the container will fail to start - because the Docker backend process doesn't have access - and you will see the "system directory could not be created" error message, e.g.:
$ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=StrongPassw0rd" -p 1433:1433 -v /var/mssql-data:/var/opt/mssql --name sqlservercontainer -d mcr.microsoft.com/mssql/server:2019-latest
9d6bf76a91af08329ea07fafb67ae68410d5320d9af9db3b1bcc8387821916da
$ docker logs 9d6bf76a91af08329ea07fafb67ae68410d5320d9af9db3b1bcc8387821916da
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
/opt/mssql/bin/sqlservr: Error: The system directory [/.system] could not be created. File: LinuxDirectory.cpp:420 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
To correct the situation, you need to give your own account access to the folder; a container using that volume mapping will then start successfully:
$ sudo chown $USER /var/mssql-data
$ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=StrongPassw0rd" -p 1433:1433 -v /var/mssql-data:/var/opt/mssql --name sqlservercontainer -d mcr.microsoft.com/mssql/server:2019-latest
3b6634f234024e07af253e69f23971ab3303b3cb6b7bc286463e196dae4de82e
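To confirm the fix (standard Docker CLI; container name as used above):
ls -la /var/mssql-data                        # the folder should now be owned by your user
docker ps --filter name=sqlservercontainer    # the container should show as Up
docker logs sqlservercontainer | tail -n 20   # startup messages with no permission errors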

Docker Container exit immediately after running

I'm a Docker newbie and tried to resolve the issue after checking similar SO questions, without success, so please don't mark it as a duplicate.
Issue:
The container always exits immediately after it's created and starts running.
I have tried to run the mssql instance using the command:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Technocrat123’ -p 1433:1433 -d microsoft/mssql-server-linux
When trying the approach from a similar SO link:
$ docker run -t -d --name microsoft/mssql-server-linux 0adcdf822722
got the following error ,
Unable to find image '0adcdf822722:latest' locally
docker: Error response from daemon: repository 0adcdf822722 not found: does not exist or no pull access.
When I tried to kill the process (referring to link1), I got:
Kill: illegal process id: PID
I'm using a Mac. Thanks in advance.
Edit:
Running docker logs after the run command, e.g.
docker logs 0adcdf822722
shows:
This is an evaluation version. There are [160] days left in the evaluation period.
The SQL Server End-User License Agreement (EULA) must be accepted before SQL
Server can start. The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746388.
You can accept the EULA by specifying the --accept-eula command line option,
setting the ACCEPT_EULA environment variable, or using the mssql-conf tool.
But I have already set 'ACCEPT_EULA=Y' in the run command.
Your password (such as Technocrat123) doesn't meet the complexity requirements, so try adding a non-alphanumeric character such as an exclamation point (!). Secondly, use double quotes instead of single quotes.
To check for errors, run: docker logs ID (where ID is the container ID from docker ps).
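Since the container exits immediately, plain docker ps will not list it; something like this (standard Docker CLI) shows exited containers and their logs:
docker ps -a                 # -a includes containers that have already exited
docker logs <container-id>   # shows why SQL Server stopped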
This worked for me:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Sprpwd1234" --name sql_server_dev -p 1433:1433 -d store/microsoft/mssql-server-linux:2017-GA
Using (") instead of ('). Running Docker on Windows 10.
There is a typo in the command you are running:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Technocrat123’ -p 1433:1433 -d microsoft/mssql-server-linux
'Technocrat123’ should be 'Technocrat123'. The typo is in the end: ’ vs '.
The correct command is:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Technocrat123' -p 1433:1433 -d microsoft/mssql-server-linux
I was running Docker on Mac and trying to install SQL Server. Initially, I pasted the command provided at https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker, changing only the password, and then tried to run the Docker image. This gave me the "This is an evaluation version..." error described above. I then did one additional step after running the command from that link: I ran it again as
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Very_StrongPassword' -p 1401:1433 microsoft/mssql-server-linux:2017-latest
This kicked off the installation of SQL Server, which takes around 20-30 minutes. Then the Docker image is ready to use.

Unsure how to troubleshoot NRPE issue

I have distributed the Puppet check for Nagios available from https://github.com/liquidat/nagios-icinga-checks/blob/master/check_puppetagent
My issue is that I get different results if I execute locally vs via NRPE:
[root@nagios-client /]# /usr/lib64/nagios/plugins/check_puppetagent
OK: Puppet was last run 17 minutes and 9 seconds ago
vs
[root@nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 192.168.50.121 -c check_puppetagent
WARN: Puppet has never run, no /opt/puppetlabs/puppet/cache/state/last_run_summary.yaml found.
Editing the file /usr/lib64/nagios/plugins/check_puppetagent and changing the line to:
summary = '/opt/puppetlabs/puppet/cache/state/last_run_summaries.yaml' on the client yields the expected result:
[root@nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 192.168.50.121 -c check_puppetagent
WARN: Puppet has never run, no /opt/puppetlabs/puppet/cache/state/last_run_summaries.yaml found.
So I know the correct file is being executed.
Executing it manually from remote works:
[root@nagios ~]# ssh 192.168.50.121 "/usr/lib64/nagios/plugins/check_puppetagent"
root@192.168.50.121's password:
OK: Puppet was last run 13 seconds ago
Anyone have any ideas/suggestions on what else I can do to troubleshoot?
last_run_summary.yaml appears to only be readable by root:
https://projects.puppetlabs.com/issues/7106
When you run check_puppetagent from the command line, you're showing that you're running as root. But NRPE would likely be running check_puppetagent as the nagios user.
Try modifying your nrpe command configuration to call sudo before check_puppetagent and modify your /etc/sudoers file to give the nagios user permissions to run check_puppetagent as root.
EDIT: Also be sure to comment out the Defaults requiretty in your /etc/sudoers file.
#Defaults requiretty
nagios ALL=(ALL) NOPASSWD:/usr/lib64/nagios/plugins/check_puppetagent
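For reference, the matching NRPE command definition on the client (the file path is an assumption; it is commonly /etc/nagios/nrpe.cfg) would then look something like:
command[check_puppetagent]=sudo /usr/lib64/nagios/plugins/check_puppetagent
Restart the nrpe service after editing so the new command definition is picked up.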

Starting and populating a Postgres container in Docker

I have a Docker container that contains my Postgres database. It's using the official Postgres image which has a CMD entry that starts the server on the main thread.
I want to populate the database by running RUN psql -U postgres postgres < /dump/dump.sql before it starts listening to queries.
I don't understand how this is possible with Docker. If I place the RUN command after CMD, it will of course never be reached because Docker has finished reading the Dockerfile. But if I place it before the CMD, it will run before psql even exists as a process.
How can I prepopulate a Postgres database in Docker?
After a lot of fighting, I have found a solution ;-)
A comment posted at https://registry.hub.docker.com/_/postgres/ by "justfalter" was very useful to me.
Anyway, I have done it this way:
# Dockerfile
FROM postgres:9.4
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
db/structure.sql is an SQL dump, useful to initialize the first tablespace.
Then, the init_docker_postgres.sh:
#!/bin/bash
# this script is run by the postgres entrypoint the first time the container starts
# it imports the base database structure and creates the database for the tests
DATABASE_NAME="db_name"
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
# create default database
gosu postgres postgres --single <<EOSQL
CREATE DATABASE "$DATABASE_NAME";
GRANT ALL PRIVILEGES ON DATABASE "$DATABASE_NAME" TO postgres;
EOSQL
# clean sql_dump - because I want to have a one-line command
# remove indentation
sed "s/^[ \t]*//" -i "$DB_DUMP_LOCATION"
# remove comments
sed '/^--/ d' -i "$DB_DUMP_LOCATION"
# remove new lines
sed ':a;N;$!ba;s/\n/ /g' -i "$DB_DUMP_LOCATION"
# remove other spaces
sed 's/ */ /g' -i "$DB_DUMP_LOCATION"
# remove firsts line spaces
sed 's/^ *//' -i "$DB_DUMP_LOCATION"
# append new line at the end (suggested by @Nicola Ferraro)
sed -e '$a\' -i "$DB_DUMP_LOCATION"
# import sql_dump
gosu postgres postgres --single "$DATABASE_NAME" < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
So finally:
# no postgres is running
[myserver]# psql -h 127.0.0.1 -U postgres
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
[myserver]# docker build -t custom_psql .
[myserver]# docker run -d --name custom_psql_running -p 5432:5432 custom_psql
[myserver]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4212697372 custom_psql:latest "/docker-entrypoint. 9 minutes ago Up 9 minutes 0.0.0.0:5432->5432/tcp custom_psql_running
[myserver]# psql -h 127.0.0.1 -U postgres
psql (9.2.10, server 9.4.1)
WARNING: psql version 9.2, server version 9.4.
Some psql features might not work.
Type "help" for help.
postgres=#
# postgres is now initialized with the dump
Hope it helps!
For those who want to initialize a PostgreSQL DB with millions of records during the first run:
Import using *.sql dump
You can do a simple SQL dump and copy the dump.sql file into /docker-entrypoint-initdb.d/. The problem is speed. My dump.sql script is about 17MB (a small DB: 10 tables, with 100k rows in only one of them) and the initialization takes over a minute (!). That is unacceptable for local development, unit tests, etc.
Import using binary dump
The solution is to make a binary PostgreSQL dump and use the shell script initialization support.
Then the same DB is initialized in about 500ms instead of 1 minute.
1. Create the dump.pgdata binary dump of a DB named "my-db"
Directly from within a container, or from your local DB:
pg_dump -U postgres --format custom my-db > "dump.pgdata"
Or from the host, from a running container (postgres-container):
docker exec postgres-container pg_dump -U postgres --format custom my-db > "dump.pgdata"
2. Create a Docker image with a given dump and initialization script
$ tree
.
├── Dockerfile
└── docker-entrypoint-initdb.d
    ├── 01-restore.sh
    ├── 02-small-updates.sql
    └── dump.pgdata
$ cat Dockerfile
FROM postgres:11
COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
$ cat docker-entrypoint-initdb.d/01-restore.sh
#!/bin/bash
file="/docker-entrypoint-initdb.d/dump.pgdata"
dbname=my-db
echo "Restoring DB using $file"
pg_restore -U postgres --dbname=$dbname --verbose --single-transaction < "$file" || exit 1
$ cat docker-entrypoint-initdb.d/02-small-updates.sql
-- some updates on your DB, for example for next application version
-- this file will be executed on DB during next release
UPDATE ... ;
3. Build an image and run it
$ docker build -t db-test-img .
$ docker run -it --rm --name db-test db-test-img
Alternatively, you can just mount a volume to /docker-entrypoint-initdb.d/ that contains all your DDL scripts. You can put in *.sh, *.sql, or *.sql.gz files and it will take care of executing those on start-up.
e.g. (assuming you have your scripts in /tmp/my_scripts)
docker run -v /tmp/my_scripts:/docker-entrypoint-initdb.d postgres
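The entrypoint executes those files in lexical order, so a numeric prefix is a simple way to control ordering (the file names below are only illustrative):
/tmp/my_scripts/
├── 01-schema.sql
├── 02-data.sql.gz
└── 99-post-load.sh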
There is yet another option available that utilises Flocker:
Flocker is a container data volume manager that is designed to allow databases like PostgreSQL to easily run in containers in production. When running a database in production, you have to think about things like recovering from host failure. Flocker provides tools for managing data volumes across a cluster of machines like you have in a production environment. For example, as a Postgres container is scheduled between hosts in response to server failure, Flocker can automatically move its associated data volume between hosts at the same time. This means that when your Postgres container starts up on a new host, it has its data. This operation can be accomplished manually using the Flocker API or CLI, or automatically by a container orchestration tool that Flocker integrates with, for example Docker Swarm, Kubernetes or Mesos.
I followed the same solution as @damoiser. The only difference was that I wanted to import all of the dump data.
Please follow the solution below (I have not done any kind of checks):
Dockerfile
FROM postgres:9.5
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
then the init_docker_postgres.sh script
#!/bin/bash
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
psql -U postgres < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
and then you can build your image as
docker build -t abhije***/postgres-data .
docker run -d abhije***/postgres-data
My solution is inspired by Alex Dguez's answer, which unfortunately didn't work for me because:
I used a pg-9.6 base image, and RUN /docker-entrypoint.sh --help never ran through for me; it always complained with The command '/bin/sh -c /docker-entrypoint.sh -' returned a non-zero code: 1
I don't want to pollute the /docker-entrypoint-initdb.d dir
The following answer is originally from my reply in another post: https://stackoverflow.com/a/59303962/4440427. It should be noted that the solution is for restoring from a binary dump instead of from plain SQL as asked by the OP, but it can be modified slightly to adapt to the plain-SQL case.
Dockerfile:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu@cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P@ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
    nohup bash -c "docker-entrypoint.sh postgres &" && \
    /tmp/wait-for-pg-isready.sh && \
    psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
    psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
    pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
    psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
    rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
    CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e
get_non_lo_ip() {
  local _ip _non_lo_ip _line _nl=$'\n'
  while IFS=$': \t' read -a _line ;do
    [ -z "${_line%inet}" ] &&
      _ip=${_line[${#_line[1]}>4?1:2]} &&
      [ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
  done< <(LANG=C /sbin/ifconfig)
  printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}
get_non_lo_ip NON_LO_IP
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
  >&2 echo "Postgres is not ready - sleeping..."
  sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
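A build/run sketch for the Dockerfile above (image/container names and the published port are placeholders; it assumes example.pg and wait-for-pg-isready.sh are present in the build context):
docker build --build-arg DB_DUMP_FILE=example.pg -t restored-db-img .
docker run -d -p 5432:5432 --name restored-db restored-db-img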
I was able to load the data in by prepending the run command in the Dockerfile with /etc/init.d/postgresql. My Dockerfile has the following line, which is working for me:
RUN /etc/init.d/postgresql start && /usr/bin/psql -a < /tmp/dump.sql
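For this to work, the dump has to be copied into the image first, and the base image must provide the /etc/init.d/postgresql init script (i.e. Postgres installed via the distribution's packages rather than the official postgres image). A minimal sketch of the line that would precede it (the file name is an assumption):
COPY dump.sql /tmp/dump.sql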
For E2E tests, where we need a database with the structure and data already saved in the Docker image, we have done the following:
Dockerfile:
FROM postgres:9.4.24-alpine
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV PGDATA /pgdata
COPY database.backup /tmp/
COPY database_restore.sh /docker-entrypoint-initdb.d/
RUN /docker-entrypoint.sh --help
RUN rm -rf /docker-entrypoint-initdb.d/database_restore.sh
RUN rm -rf /tmp/database.backup
database_restore.sh:
#!/bin/sh
set -e
pg_restore -C -d postgres /tmp/database.backup
To create the image:
docker build .
To start the container:
docker run --name docker-postgres -d -p 5432:5432 <Id-docker-image>
This does not restore the database every time the container is booted. The structure and data of the database are already contained in the created Docker image.
We based this on the following article, but eliminated the multi-stage build:
Creating Fast, Lightweight Testing Databases in Docker
Edit: This no longer works with the 9.4-alpine tag because it does not run the database_restore.sh script. Use version 9.4.24-alpine.
My goal was to have an image that contains the database, i.e. saving the time needed to rebuild it every time I do docker run or docker-compose up.
We would just have to manage to get the line exec "$@" out of docker-entrypoint.sh. So I added the following to my Dockerfile:
#Copy my sql scripts into the image to /docker-entrypoint-initdb.d:
COPY ./init_db /docker-entrypoint-initdb.d
#init db
RUN grep -v 'exec "$@"' /usr/local/bin/docker-entrypoint.sh > /tmp/docker-entrypoint-without-serverstart.sh && \
    chmod a+x /tmp/docker-entrypoint-without-serverstart.sh && \
    /tmp/docker-entrypoint-without-serverstart.sh postgres && \
    rm -rf /docker-entrypoint-initdb.d/* /tmp/docker-entrypoint-without-serverstart.sh

Mysqldump connecting issue

I'm trying to make a dump with the following command:
mysqldump -v -u root -p -h 127.0.0.1 -P 3308 -x --add-drop-table
--add-locks --create-options -K -e -q -A > database.sql
The result (after password input) is the message "Connecting to 127.0.0.1...". After this, nothing happens (no errors, just waiting).
database.sql is an empty file.
Why do I see no activity? Is it a bug?
From http://linuxcommand.org/man_pages/mysqldump1.html
The password to use when connecting to the server. If you use the
short option form (-p), you cannot have a space between the option and
the password. If you omit the password value following the --password
or -p option on the command line, you are prompted for one.
The system may be waiting for you to input a password.
If you want to avoid that, just add the password to the command. Assuming your password is "FLOWER":
mysqldump -v -u root -pFLOWER -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql
This problem, as you describe it, can be caused by the MySQL server not running or not being available on the host (in your case, localhost), or by it running on a different port.
What kind of system is it? If it is a flavor of Linux/Unix, you can run
ps -ef|egrep mysql
to see if the mysql server is running. Check the equivalent command on Windows or whatever else you may be running. Also, you can verify that this is the problem by seeing if this works:
mysql -u root -p -h 127.0.0.1 -P 3308
The solution is to start the server:
/etc/init.d/mysqld start
or the equivalent on your system.
Note: if it is running, determine what port it is on - it is possible that you are not specifying the right port number. The default is 3306 - it is unusual that you are using a non-standard port.
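A quick way to check which port the server is actually listening on (standard Linux/MySQL commands; adjust for your platform):
sudo netstat -tlnp | grep mysqld                    # shows the port mysqld is listening on
mysql -u root -p -e "SHOW VARIABLES LIKE 'port';"   # or ask the server itself, once you can connect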
