Solution for Docker cron scheduled job not running
I am trying to use a docker container based on an Alpine image to run a scheduled cron job, following this tutorial, but after printing the statement in my startup script, the container just exits, without running my other script.
My docker-compose service is configured as follows:
```yaml
cron:
  image: alpine:3.11
  command: /usr/local/startup.sh && crond -f -l 8
  volumes:
    - ./cron_tasks_folder/1min:/etc/periodic/1min/:ro
    - ./cron_tasks_folder/15min:/etc/periodic/15min/:ro
    - ./cron_tasks_folder/hourly:/etc/periodic/hourly/:ro
    - ./scripts/startup.sh:/usr/local/startup.sh:ro
```
So it runs an initial script called `startup.sh` and then starts the cron daemon. The `startup.sh` script contains the following:
```sh
#!/bin/sh
echo "Starting startup.sh.."
echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
sleep 300
```
I dropped a sleep command in there just so I could launch an interactive shell on the container and make sure everything inside it looks good. The mounted 1min folder holds the one-minute scripts; I have added a test script in there, and I can verify it's there:
```
/etc/periodic/1min # ls -a
.           ..          testScript
```
The script is executable:
```
/etc/periodic/1min # ls -l testScript
-rwxr-xr-x    1 root     root            31 Jul 30 01:51 testScript
```
testScript is just an echo statement to make sure it’s working first:
```sh
echo "The donkey is in charge"
```
And looking at the `root` file in `/etc/crontabs`, I see the following (I've re-run the container several times, and each time it appends another 1min line, which is unnecessary, but I think not the problem here):

```
# do daily/weekly/monthly maintenance
# min   hour    day     month   weekday command
*/15    *       *       *       *       run-parts /etc/periodic/15min
0       *       *       *       *       run-parts /etc/periodic/hourly
0       2       *       *       *       run-parts /etc/periodic/daily
0       3       *       *       6       run-parts /etc/periodic/weekly
0       5       1       *       *       run-parts /etc/periodic/monthly
*       *       *       *       *       run-parts /etc/periodic/1min
*       *       *       *       *       run-parts /etc/periodic/1min
*       *       *       *       *       run-parts /etc/periodic/1min
*       *       *       *       *       run-parts /etc/periodic/1min
*       *       *       *       *       run-parts /etc/periodic/1min
```
The echo statement in `testScript` is never printed to my terminal, and the container exits with exit code 0 shortly after starting. I want this statement printed every minute… what am I missing?
In the docker compose file you have

```yaml
command: /usr/local/startup.sh && crond -f -l 8
```
The intention is to run this as a shell command, but it's not at all clear from the question that that's what will happen; it depends on your `ENTRYPOINT`. If the entrypoint is defined in exec (JSON array) form, no additional shell is provided, and the `command` value is passed as literal arguments to the entrypoint.
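To see why the presence of a shell matters, here is a quick sketch (using plain `echo` as a stand-in for the container's command) of what happens when `&&` reaches a program as a literal argument rather than being interpreted by a shell:

```shell
# Without a shell to interpret it, '&&' is just another argument.
# The quoted '&&' below reaches echo verbatim, so this is ONE command:
/bin/echo hello '&&' echo world
# prints: hello && echo world

# Compare with a shell interpreting '&&' as an operator (TWO commands):
sh -c 'echo hello && echo world'
# prints: hello
#         world
```

In the same way, if no shell wraps your `command`, `startup.sh` would simply receive `&&`, `crond`, `-f`, `-l`, and `8` as arguments, and `crond` would never run as a separate process.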
Assuming it does become a shell command: `&&` in the shell runs the left-hand side and, only if that succeeds, then runs the right-hand side. So `startup.sh` needs to complete before `crond` is executed, and since `startup.sh` ends with `sleep 300`, `crond` is invoked only after those 300 seconds.
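A minimal illustration of that short-circuit behaviour:

```shell
# The right-hand side of '&&' runs only if the left-hand side exits 0.
sh -c 'true && echo "right side ran"'    # prints "right side ran"
sh -c 'false && echo "right side ran"'   # prints nothing

# And a long-running left-hand side delays the right-hand side:
sh -c 'sleep 1 && echo "one second later"'
```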
In either case, `crond` is either not invoked at all, or not until the `sleep` completes. The comments show that, in fact, an error starting `crond` was eventually discovered.
Using an entrypoint script like this is standard practice for configuring the environment, or providing runtime parameters, before invoking the main executable. To do it right, make sure to use `exec` to run the main executable, so that it receives the signals that would otherwise go to the shell running the entrypoint script.
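A rough demonstration of why `exec` matters (the nested `sh -c` invocations here are just stand-ins for an entrypoint script and its main process): because `exec` replaces the shell rather than forking a child, the exec'd program keeps the shell's PID, which is why it receives the signals the shell would have received.

```shell
# The outer shell prints its PID, then exec's a new shell that prints its PID.
# Because exec replaces the process, both lines show the same PID.
sh -c 'echo $$; exec sh -c "echo \$\$"'
```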
So at the end of your script (with the `sleep` removed), run:

```sh
exec crond -f -l 8
```

This replaces the shell with `crond`, so that `crond` receives all signals (at this point the shell is gone). It's subtle but important!
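Putting the pieces together, a corrected `startup.sh` might look like this (a sketch based on the script in the question; it keeps the original crontab entry and drops the `sleep`):

```sh
#!/bin/sh
echo "Starting startup.sh.."
# Register the one-minute job (note: '>>' appends, so repeated runs add
# duplicate lines; '>' or a guard would avoid that).
echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
# Replace this shell with crond so it becomes the container's main process
# and receives signals (e.g. SIGTERM on 'docker stop') directly.
exec crond -f -l 8
```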
In general, keep the invocation of the application as simple as possible. Case in point: your execution process was split between entrypoint, command, and startup script, with no clear interface between them. You wouldn't have gotten hung up on the invocation if you had put `crond` directly into the Dockerfile and left it at that. Sometimes arguments must be provided at runtime, but environment variables, which have names rather than just positions, are often preferable; this keeps invocations simple and debugging straightforward. When that doesn't work, a shell script entrypoint is a great solution; just make sure to `exec` your final process!
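For completeness, here is one way the simpler approach could look (a hypothetical Dockerfile; the paths mirror the compose file from the question):

```dockerfile
FROM alpine:3.11

# Bake the periodic scripts and the crontab entry into the image.
COPY cron_tasks_folder/1min/ /etc/periodic/1min/
RUN echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root

# crond is the single, clearly stated main process (exec form, no shell).
CMD ["crond", "-f", "-l", "8"]
```

With this, there is no startup script and no `&&` to worry about: the container's one job is to run `crond` in the foreground.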