03/24/26

How to Set Up Cron Jobs on AWS Without Infrastructure Code

Scheduled tasks without EventBridge configuration

6 Min Read

Running code on a schedule is one of the most common backend requirements. Clean up expired sessions, send digest emails, sync data between systems, archive old records. The task itself is usually 10 lines of application logic. Getting it to run on a schedule in AWS takes considerably more effort.

AWS doesn't have a standalone cron service. Scheduled tasks require stitching together EventBridge (formerly CloudWatch Events) to trigger a Lambda function or ECS task, with IAM roles to grant permissions between them and CloudWatch Logs to capture output. Each piece needs its own configuration, and they all need to reference each other correctly.

What Terraform cron jobs look like on AWS

Here's the Terraform HCL for a single scheduled task that runs a Lambda function every hour:

```hcl
# IAM role for the Lambda function
resource "aws_iam_role" "cleanup_lambda" {
  name = "cleanup-lambda-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

# Attach basic execution policy for CloudWatch Logs
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.cleanup_lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

# The Lambda function itself
resource "aws_lambda_function" "cleanup" {
  filename         = "cleanup.zip"
  function_name    = "cleanup-expired-data"
  role             = aws_iam_role.cleanup_lambda.arn
  handler          = "index.handler"
  runtime          = "nodejs20.x"
  timeout          = 300
  source_code_hash = filebase64sha256("cleanup.zip")
}

# EventBridge rule with the schedule
resource "aws_cloudwatch_event_rule" "hourly_cleanup" {
  name                = "hourly-cleanup"
  description         = "Trigger cleanup every hour"
  schedule_expression = "rate(1 hour)"
}

# Connect the rule to the Lambda function
resource "aws_cloudwatch_event_target" "cleanup_target" {
  rule      = aws_cloudwatch_event_rule.hourly_cleanup.name
  target_id = "cleanup-lambda"
  arn       = aws_lambda_function.cleanup.arn
}

# Grant EventBridge permission to invoke the function
resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.cleanup.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.hourly_cleanup.arn
}
```

That's roughly 55 lines of HCL for one cron job. The actual cleanup logic isn't even in this file. It's packaged separately as a zip, deployed separately, and you need a build step to produce it.

Add a second scheduled task and you're writing most of this again. The IAM role might be reusable, but the EventBridge rule, target, permission, and Lambda function are all per-job resources. Teams with five or six cron jobs end up with hundreds of lines of infrastructure code dedicated to running functions on a timer.

The AWS Console alternative

You can skip Terraform and configure all of this through the AWS Console. Create the Lambda function in the Lambda dashboard, write or upload your code, then navigate to EventBridge, create a new rule, pick "Schedule" as the rule type, enter a cron or rate expression, and select your Lambda as the target.

This works for a quick prototype. The problems show up over time. There's no record of what you configured or why. When someone else on the team needs to understand the schedule, they have to click through the console to find it. If you need the same setup in a staging environment, you're repeating the manual steps. And if something breaks at 3am, reconstructing what was supposed to happen from console breadcrumbs isn't a great debugging experience.

Console setup also has a subtle ordering problem. You need the Lambda to exist before you can target it from EventBridge, but you need to know the EventBridge rule ARN to set up the Lambda permission. In Terraform, the dependency graph handles this automatically. In the console, you end up going back and forth between services to wire things together.
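The ordering Terraform resolves can be pictured as a small dependency graph over the six resources from the example above. Here's a toy sketch in plain TypeScript (a simplified illustration, not Terraform's actual planner):

```typescript
// Each resource lists what must exist before it can be created,
// mirroring the references in the Terraform example.
const deps: Record<string, string[]> = {
  role: [],
  policy_attachment: ["role"],
  lambda: ["role"],
  rule: [],
  target: ["rule", "lambda"],
  permission: ["rule", "lambda"],
};

// Simple topological sort: repeatedly pick resources whose
// dependencies have already been created.
function creationOrder(graph: Record<string, string[]>): string[] {
  const done: string[] = [];
  const pending = new Set(Object.keys(graph));
  while (pending.size > 0) {
    for (const name of [...pending]) {
      if (graph[name].every((d) => done.includes(d))) {
        done.push(name);
        pending.delete(name);
      }
    }
  }
  return done;
}

console.log(creationOrder(deps));
// -> ["role", "policy_attachment", "lambda", "rule", "target", "permission"]
```

Terraform derives this order from the attribute references between resources; in the console, you play the role of the sorter yourself, which is why you end up bouncing between the Lambda and EventBridge screens.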

What scheduled tasks look like with infrastructure from code

Encore takes a different approach. Instead of configuring AWS services to trigger your code, you declare the schedule alongside the code it runs:

```typescript
import { CronJob } from "encore.dev/cron";
import { api } from "encore.dev/api";

const cleanup = api({ expose: false }, async () => {
  // delete expired sessions, archive old records, etc.
});

const _ = new CronJob("cleanup", {
  title: "Clean up expired data",
  every: "1h",
  endpoint: cleanup,
});
```

When you deploy this, Encore provisions the EventBridge rule, the compute target, the IAM permissions, and the logging configuration in your AWS account. The schedule lives next to the code it runs, in the same file, in the same language.

Adding another cron job means writing another CronJob declaration. There are no new Terraform files, IAM roles, or EventBridge targets to wire up.
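For instance, a daily digest job could live in the same file as the hourly cleanup. A sketch, where `sendDigest` is a hypothetical endpoint for illustration:

```typescript
import { CronJob } from "encore.dev/cron";
import { api } from "encore.dev/api";

// Hypothetical second endpoint; the name and logic are illustrative.
const sendDigest = api({ expose: false }, async () => {
  // build and send digest emails
});

// A second schedule is just another declaration -- no additional
// IAM roles or EventBridge wiring in your repository.
const _digest = new CronJob("daily-digest", {
  title: "Send daily digest emails",
  every: "24h",
  endpoint: sendDigest,
});
```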

Where the difference compounds

A single cron job is manageable in any approach. The gap widens as your application grows.

Consider a typical backend with four scheduled tasks: session cleanup (hourly), report generation (daily), data sync from a third-party API (every 15 minutes), and monthly invoice processing. In Terraform, that's four Lambda functions, four EventBridge rules, four targets, four permissions, and probably two or three IAM roles. Somewhere around 200 lines of HCL that need to stay in sync with the application code that actually runs on those schedules.

With Encore, it's four CronJob declarations in your application code. Each one is next to the function it calls. When you change the schedule or remove a job, you change it in the same place you'd change the business logic. There's no separate infrastructure repository to keep in sync.

The schedule expressions support both simple intervals (`every: "1h"`, `every: "15m"`) and standard cron syntax for more specific timing: monthly on the first at midnight, weekdays only at 9am, or every 6 hours starting at an offset. The underlying EventBridge rules support the same scheduling granularity regardless of how you define them.
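Those timing examples correspond to standard five-field cron expressions. A sketch, assuming Encore's `schedule` field for raw cron expressions and using a hypothetical `monthlyReport` endpoint:

```typescript
import { CronJob } from "encore.dev/cron";
import { api } from "encore.dev/api";

// Hypothetical endpoint for illustration.
const monthlyReport = api({ expose: false }, async () => {
  // generate and store the monthly report
});

// Five-field cron: minute hour day-of-month month day-of-week.
// "0 0 1 * *"      -> monthly, on the 1st, at midnight
// "0 9 * * 1-5"    -> weekdays only, at 9am
// "0 3-23/6 * * *" -> every 6 hours, offset to start at 3am
const _ = new CronJob("monthly-report", {
  title: "Generate monthly report",
  schedule: "0 0 1 * *",
  endpoint: monthlyReport,
});
```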

Observability without extra configuration

One thing that's easy to overlook in the Terraform approach is logging. The AWSLambdaBasicExecutionRole policy grants write access to CloudWatch Logs, and Lambda creates a log group automatically on first invocation, but that group defaults to retaining logs forever. Controlling retention means declaring the log group in Terraform yourself, and alarms are separate resources on top of that. Knowing that a cron job failed at 2am requires either a CloudWatch alarm (more Terraform) or someone checking the logs manually.

Encore includes built-in tracing and logging for cron job executions. Each run appears in the trace view with timing, logs, and error status. Failed executions surface in the dashboard without extra configuration, because the observability layer is part of the platform rather than something you bolt on per-job.

When Terraform still makes sense

Terraform gives you explicit control over every resource attribute. If you need a specific Lambda memory configuration, a dead-letter queue for failed invocations, or custom retry behavior, Terraform lets you specify all of that directly. Some compliance environments require infrastructure to be reviewed as code before deployment, and Terraform's plan/apply workflow fits that process.

If your team already has a mature Terraform setup with modules for Lambda-based cron jobs, adding another scheduled task through that existing system is straightforward. The overhead is in the initial setup, not the incremental additions.

For teams that don't have that setup already, or where the developers writing the scheduled tasks are also responsible for deploying them, the infrastructure-from-code approach removes a layer of indirection. The cron schedule is application configuration, not infrastructure configuration, and treating it that way keeps it closer to the code that matters.

Ready to escape the maze of complexity?

Encore Cloud is the development platform for building robust type-safe distributed systems with declarative infrastructure.