
Worker configuration

This section explains the in-depth configuration of workers.

info

Updating the configuration of a worker in Taskurai will start new instances of your container and stop the old instances.

Each running container will receive a terminate signal (see CancellationToken in the command) to shut down gracefully.
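
A minimal sketch of honoring that signal in a command handler is shown below. The handler shape (class and method names) is hypothetical and only illustrates the idea; the actual Taskurai command interface is covered in the command documentation. The key point is to check the CancellationToken between units of work so the instance can stop cleanly:

using System.Threading;
using System.Threading.Tasks;

// Hypothetical handler shape for illustration only; not the actual Taskurai API.
public class TestActionHandler
{
    public async Task HandleAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // Do a small unit of work, then re-check the token.
            await ProcessNextChunkAsync();
        }

        // The instance is shutting down: persist progress here so the task
        // can be retried or resumed by a new instance.
    }

    private Task ProcessNextChunkAsync() => Task.CompletedTask; // placeholder work
}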

Worker deployment configuration

A worker is deployed using a configuration section in the solution yaml.

Reference worker configuration

This is a reference worker configuration sample:

workers:
  TestWorker:
    container:
      imageName: taskurai-worker-sample
      image: mycontainerregistry.azurecr.io/taskurai-worker-sample:latest
      resourceAllocation: Cpu_0_25_Memory_0_5Gi
      server: clientcontainerregistry.azurecr.io
      userName: containertokenfortaskuraipaying
      passwordSecretReference: containerregistrypassword
    scaling:
      scaleOutTaskCount: 100
      minInstances: 0
      maxInstances: 2
    secrets:
      - name: containerregistrypassword
        secretReference: containerregistrypassword
    environment:
      - name: EnvironmentValue
        value: test123
    commands:
      - testAction
      - testAction2
    cleanupCompletedTasksAfterSeconds: 2592000

Worker configuration

Property | Type | Required | Description
name | string | No | Override worker name (default: worker key)
container | Container | Yes | Container configuration
scaling | Scaling | Yes | Scaling configuration
secrets | List of secrets | No | List of secrets
environment | List of environment variables | No | List of environment variables
commands | List of commands (string) | No | List of commands
cleanupCompletedTasksAfterSeconds | integer | No | Cleanup completed tasks after x seconds

Container image configuration

Property | Type | Required | Description
imageName | string | Yes | Container image name
image | string | Yes | Container image
server | string | No | The container registry server
userName | string | No | Container registry username
passwordSecretReference | string | No | A secret should be made within the worker and referenced here

Taskurai supports:

  • Any Linux-based x86-64 (linux/amd64) container image
  • Containers from any public or private container registry

Features include:

  • There's no required base container image.
  • If a container crashes, it automatically restarts.

When using a container image from a private container registry, the following settings are required:

  • server
  • userName
  • passwordSecretReference
tip

When you want to put a new container image into use, you need to:

  • Update the image value if the tag has changed.
  • Deploy the worker again.

Worker size configuration

Each worker can be configured to a size as needed. The following settings are supported:

Worker size | Description
Cpu_0_25_Memory_0_5Gi | 0.25 vCPU cores / 0.5Gi Memory
Cpu_0_50_Memory_1_0Gi | 0.5 vCPU cores / 1Gi Memory
Cpu_0_75_Memory_1_5Gi | 0.75 vCPU cores / 1.5Gi Memory
Cpu_1_00_Memory_2_0Gi | 1 vCPU core / 2Gi Memory
Cpu_1_25_Memory_2_5Gi | 1.25 vCPU cores / 2.5Gi Memory
Cpu_1_50_Memory_3_0Gi | 1.5 vCPU cores / 3Gi Memory
Cpu_1_75_Memory_3_5Gi | 1.75 vCPU cores / 3.5Gi Memory
Cpu_2_00_Memory_4_0Gi | 2 vCPU cores / 4Gi Memory

Scaling configuration

Taskurai manages automatic horizontal scaling of workers, depending on the number of tasks waiting for a worker.

When a worker scales, a new instance of that worker is created. When a worker instance is no longer needed, it is automatically stopped.

You can configure the scaling behavior of a worker using these settings:

Property | Type | Required | Description
scaleOutTaskCount | integer | Yes | The number of queued tasks per instance; each additional scaleOutTaskCount tasks waiting in the queue triggers one more instance
minInstances | integer | Yes | The minimum number of instances (0-400)
maxInstances | integer | Yes | The maximum number of instances (1-400)

For example, with scaleOutTaskCount: 100, minInstances: 0, and maxInstances: 2, an empty queue runs no instances, and roughly one instance is added per 100 queued tasks, up to the two-instance maximum.

Keep the following items in mind:

  • Taskurai billing is based on the maximum number of instances (the total size of the configuration) defined in one month.
  • The Azure resources created are charged like this:
    • You aren't billed usage charges for the containers if your container app scales to zero.
    • Instances that are running idle but remain in memory are billed at a lower "idle" rate.
    • If you want to ensure that an instance of a worker is always running (for better response times), set the minimum number of instances to 1 or higher.
caution

Consider the use case of your commands and whether concurrent execution of tasks is supported.

If your worker commands call rate-limited services, for example mailing services, choose the scaling settings of each worker accordingly.

Secret configuration

Each worker can contain secrets, which can be used in environment variables and as the container registry password.

Secret properties:

Property | Type | Required | Description
name | string | Yes | Secret name
secretReference | string | No | Global secret reference (recommended)
value | string | No | Secret value (not recommended)

While it is technically possible to define a secret value directly in the worker configuration, it is highly recommended to use globally defined secrets in Taskurai and reference them in the worker.

options:
  ...
  secrets:
    - myglobalcontainerregistrypassword
    - myglobalsecret
workers:
  TestWorker:
    container:
      ...
      passwordSecretReference: containerregistrypassword
      ...
    secrets:
      - name: containerregistrypassword
        secretReference: myglobalcontainerregistrypassword
      - name: mycontainersecret
        secretReference: myglobalsecret
    environment:
      - name: ENV_WITH_SECRET
        secretReference: mycontainersecret
      - name: ENV_WITH_VALUE
        value: myvalue
    ...

Note that global Taskurai secrets can only be referenced in the worker's secret configuration. To use a secret within the container itself, reference the local secrets defined in the worker.

Environment configuration

Environment variables can be set for use in containers.

Environment variable properties:

Property | Type | Required | Description
name | string | Yes | Variable name
secretReference | string | No | Reference to a worker secret
value | string | No | Variable value

An environment variable can either be a plain text value or can reference a container secret.

options:
  ...
  secrets:
    - myglobalcontainerregistrypassword
    - myglobalsecret
workers:
  TestWorker:
    ...
    secrets:
      ...
      - name: mycontainersecret
        secretReference: myglobalsecret
    environment:
      - name: ENV_WITH_SECRET
        secretReference: mycontainersecret
      - name: ENV_WITH_VALUE
        value: myvalue
    ...
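
Inside the container, these variables are exposed as ordinary process environment variables. A minimal sketch of reading the two variables from the example above in a .NET worker (standard Environment API; the variable names come from the sample configuration, not a fixed Taskurai convention):

using System;

// Read the values configured in the worker's environment section.
var secretValue = Environment.GetEnvironmentVariable("ENV_WITH_SECRET");
var plainValue = Environment.GetEnvironmentVariable("ENV_WITH_VALUE");

Console.WriteLine($"ENV_WITH_VALUE = {plainValue}");
// Avoid logging secretValue: it carries the referenced secret.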

Command configuration

Each worker should have at least one command. A command name must be unique in Taskurai and cannot be shared between different workers.

Command names are case-insensitive and should match the worker's setup:

workers:
  TestWorker:
    ...
    commands:
      - testAction
      - testAction2
    ...
tip

Depending on your scenario, a worker can contain one or more commands.

It is recommended to balance the following principles:

  • Single responsibility: Keep workers dedicated to a single task, separating concerns. When a command fails, the impact on the whole system is limited.
  • Keep it simple: Workers should be designed to be simple to set up and easy to maintain.
  • Fit for purpose: Keep workers small and fast for short-running tasks that are in high demand (sending emails, generating invoices, etc.). Long-running jobs (aggregating data, etc.) can be beefier.
  • Shared resources: In some cases, it can be useful for a worker to contain multiple commands that share similar purposes, like sending notification emails of all kinds.

In short, don't design a new monolithic worker. Taskurai is designed to handle many workers at scale.

Cleanup configuration

For each worker, you can optionally configure the cleanup of completed tasks.

The value is specified in seconds; for example, 2592000 seconds is 30 days. Omit the setting to keep completed tasks forever:

workers:
  TestWorker:
    ...
    cleanupCompletedTasksAfterSeconds: 2592000
    ...

Worker runtime configuration

Each worker can be configured using the appsettings.json file:

appsettings.json
{
  ...
  "Taskurai": {
    "IsolationMode": false,
    "WorkerName": "TestWorker",
    "VisibilityTimeout": 300,
    "MaxConcurrentTasks": 1
  }
  ...
}

For local development, you should use the Development configuration file:

appsettings.Development.json
{
  ...
  "Taskurai": {
    "IsolationMode": true,
    "WorkerName": "TestWorker",
    "VisibilityTimeout": 300,
    "MaxConcurrentTasks": 1
  }
  ...
}

The following settings can be configured:

Property | Type | Required | Default | Description
IsolationMode | boolean | No | false | Run the worker in isolation mode; recommended when developing locally
WorkerName | string | Yes | - | The name of the worker
VisibilityTimeout | integer | No | 300 | The time in seconds a task stays invisible to other instances of the worker (maximum 7 days)
QueueReaderInterval | integer | No | -1 | The delay in milliseconds between queue lookups while tasks are still being found; -1 means no delay
QueueReaderIdleInterval | integer | No | 1000 | The delay in milliseconds between queue lookups after a lookup finds no new tasks; -1 means no delay
MaxConcurrentTasks | integer | No | 1 | The maximum number of tasks processed concurrently by one worker instance
warning

Handling Visibility Timeouts With Care

Visibility timeouts are crucial for managing long-running tasks. To ensure the system correctly interprets a task's status, it's essential to regularly extend the visibility timeout (See Visibility Timeout). This acts as a heartbeat signal, indicating the task is still active.

If a task's visibility timeout is exceeded without extension, the task will become available again for processing, potentially leading to concurrent execution.
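
As an illustration only, a long-running handler can renew its lease on a fixed cadence while the real work runs. The sketch below assumes a hypothetical extendVisibilityAsync delegate standing in for the actual extension call described on the Visibility Timeout page; the 120-second interval is chosen to stay well inside the default 300-second timeout:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class VisibilityHeartbeat
{
    // 'extendVisibilityAsync' is a placeholder for the real Taskurai extension call.
    public static async Task RunWithHeartbeatAsync(
        Func<CancellationToken, Task> longRunningWork,
        Func<Task> extendVisibilityAsync,
        CancellationToken cancellationToken)
    {
        // The work itself observes cancellationToken; the heartbeat keeps the
        // task invisible to other instances until the work completes.
        var workTask = longRunningWork(cancellationToken);

        while (!workTask.IsCompleted)
        {
            var finished = await Task.WhenAny(
                workTask,
                Task.Delay(TimeSpan.FromSeconds(120)));

            if (finished != workTask)
            {
                // Renew well before the 300-second default visibility timeout expires.
                await extendVisibilityAsync();
            }
        }

        await workTask; // propagate any exception from the work itself
    }
}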

Load leveling

Queue-based load leveling uses a buffer between tasks and services to smooth heavy workloads. This is especially beneficial for external downstream services vulnerable to failures during peaks or subject to rate limits.

It helps prevent additional costs by managing occasional spikes without requiring pricier plans.

The approach ensures system reliability, cost-effectiveness, and responsiveness to user demands.

How Taskurai provides load leveling

Taskurai implements load leveling at the level of a worker. Commands with different load leveling requirements should be organized in different workers.

Load leveling should be configured using a combination of:

  • Maximum worker instances
  • Maximum concurrent tasks per worker
  • Scale out task count
  • Queue reader interval (both in active and idle mode)

Load leveling configuration

The following example configures a worker to respect a downstream service rate limit of at most 2 requests per second (120 requests per minute), handled by at most two worker instances: with MaxConcurrentTasks set to 1 and a queue reader interval of 1000 ms, each instance starts at most about one task per second, so two instances start at most about two tasks per second.

info

The values are somewhat arbitrary and should be fine-tuned based on actual workload patterns, the processing time of tasks, and the behavior of the downstream service's rate limiting. Monitoring and adjusting based on real-world performance is crucial for optimal configuration.

Configure the number of maximum worker instances and queue reader intervals using the appsettings.json file:

appsettings.json
{
  ...
  "Taskurai": {
    ...
    "MaxConcurrentTasks": 1, // Only allow one concurrent task
    "QueueReaderInterval": 1000, // Only poll once a second for a new task (when actively receiving tasks from the queue)
    "QueueReaderIdleInterval": 1000, // Only poll once a second for a new task (in idle mode, when the last polling request found no tasks)
    ...
  }
  ...
}

Next, configure the deployment configuration and specify the maximum number of instances for a worker:

workers:
  TestWorker:
    ...
    scaling:
      scaleOutTaskCount: 15 # Scale out from 1 -> 2 instances when more than 15 tasks are queued
      minInstances: 1 # Minimum one worker instance
      maxInstances: 2 # Maximum two worker instances
    ...