# AppConfig

```rust
use horsies::{AppConfig, Horsies};

let config = AppConfig::for_database_url("postgresql://user:pass@localhost:5432/mydb");
let mut app = Horsies::new(config)?;
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `queue_mode` | `QueueMode` | `Default` | Queue operating mode |
| `custom_queues` | `Option<Vec<CustomQueueConfig>>` | `None` | Queue definitions (Custom mode only) |
| `broker` | `PostgresConfig` | required | Database connection settings |
| `cluster_wide_cap` | `Option<u32>` | `None` | Max in-flight tasks cluster-wide |
| `prefetch_buffer` | `u32` | `0` | `0` = hard cap mode, `>0` = soft cap with prefetch |
| `claim_lease_ms` | `Option<u32>` | `None` | Claim lease duration (required if `prefetch_buffer > 0`; optional override in hard cap mode) |
| `max_claim_renew_age_ms` | `u32` | `180000` | Max age (ms) of a CLAIMED task that heartbeat will renew. Older claims are left to expire, preventing indefinite renewal of orphaned tasks. Must be >= the effective claim lease |
| `recovery` | `RecoveryConfig` | defaults | Crash recovery settings |
| `resilience` | `WorkerResilienceConfig` | defaults | Worker retry/backoff and notify polling |
| `schedule` | `Option<ScheduleConfig>` | `None` | Scheduled task configuration |
| `resend_on_transient_err` | `bool` | `false` | Auto-retry transient enqueue/start failures for task sends, scheduled sends, and workflow starts |
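Most of these fields have dedicated examples below. As a hedged sketch for the two that do not (field names and defaults taken from the table above; the connection string is a placeholder), overriding them follows the same struct-update pattern:

```rust
let config = AppConfig {
    // Renew CLAIMED tasks for at most 5 minutes instead of the 180_000 ms default.
    max_claim_renew_age_ms: 300_000,
    // Auto-retry transient enqueue/start failures for sends and workflow starts.
    resend_on_transient_err: true,
    ..AppConfig::for_database_url("postgresql://...")
};
```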
Default mode (no custom queues):

```rust
let config = AppConfig {
    queue_mode: QueueMode::Default,
    custom_queues: None,
    ..AppConfig::for_database_url("postgresql://...")
};
```
Custom mode with explicit queue definitions:

```rust
use horsies::CustomQueueConfig;

let config = AppConfig {
    queue_mode: QueueMode::Custom,
    custom_queues: Some(vec![
        CustomQueueConfig { name: "high".into(), priority: 1, max_concurrency: 10 },
        CustomQueueConfig { name: "low".into(), priority: 100, max_concurrency: 3 },
    ]),
    ..AppConfig::for_database_url("postgresql://...")
};
```

See Queue Modes for details.

Limit total in-flight tasks across all workers:

```rust
let config = AppConfig {
    cluster_wide_cap: Some(100), // Max 100 in-flight tasks (RUNNING + CLAIMED)
    ..AppConfig::for_database_url("postgresql://...")
};
```

Leave it as `None` (the default) for no limit.

Note: When cluster_wide_cap is set, the system operates in hard cap mode (counts RUNNING + CLAIMED tasks). This ensures strict enforcement and fair distribution across workers.

Control whether workers can prefetch tasks beyond their running capacity:

```rust
// Hard cap mode (default) - strict enforcement, fair distribution
let config = AppConfig {
    prefetch_buffer: 0, // No prefetch, workers only claim what they can run
    ..AppConfig::for_database_url("postgresql://...")
};

// Soft cap mode - allows prefetch with lease expiry
let config = AppConfig {
    prefetch_buffer: 4,         // Prefetch up to 4 extra tasks per worker
    claim_lease_ms: Some(5000), // Prefetched claims expire after 5 seconds
    ..AppConfig::for_database_url("postgresql://...")
};
```

Important: cluster_wide_cap cannot be used with prefetch_buffer > 0. If you need a global cap, hard cap mode is required.
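To illustrate the constraint (a hedged sketch; the connection string is a placeholder), combining the two settings produces a configuration that fails validation when `Horsies` is constructed or `app.check()` runs:

```rust
// Invalid: a global cap requires hard cap mode (prefetch_buffer = 0).
let config = AppConfig {
    cluster_wide_cap: Some(100),
    prefetch_buffer: 4,
    claim_lease_ms: Some(5000),
    ..AppConfig::for_database_url("postgresql://...")
};
// Horsies::new(config) returns Err(HorsiesError) because
// cluster_wide_cap cannot be combined with prefetch_buffer > 0.
```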

See Concurrency for detailed explanation of hard vs soft cap modes.

Override crash recovery defaults:

```rust
use horsies::RecoveryConfig;

let config = AppConfig {
    recovery: RecoveryConfig {
        auto_requeue_stale_claimed: true,
        claimed_stale_threshold_ms: 120_000, // 2 minutes
        auto_fail_stale_running: true,
        running_stale_threshold_ms: 300_000, // 5 minutes
        ..RecoveryConfig::default()
    },
    ..AppConfig::for_database_url("postgresql://...")
};
```

See Recovery Config for all options.

Configure worker retry/backoff and NOTIFY fallback polling:

```rust
use horsies::WorkerResilienceConfig;

let config = AppConfig {
    resilience: WorkerResilienceConfig {
        db_retry_initial_ms: 500,
        db_retry_max_ms: 30_000,
        db_retry_max_attempts: 0, // 0 = infinite
        notify_poll_interval_ms: 5_000,
    },
    ..AppConfig::for_database_url("postgresql://...")
};
```

Enable scheduled tasks:

```rust
use horsies::{ScheduleConfig, TaskSchedule, SchedulePattern, DailySchedule};
use chrono::NaiveTime;

let config = AppConfig {
    schedule: Some(ScheduleConfig::new(vec![
        TaskSchedule::new(
            "daily-cleanup",
            "cleanup_old_data",
            SchedulePattern::Daily(DailySchedule {
                time: NaiveTime::from_hms_opt(3, 0, 0).unwrap(),
            }),
        ),
    ])),
    ..AppConfig::for_database_url("postgresql://...")
};
```

See Scheduler Overview for details.

AppConfig is validated when you construct Horsies or run app.check():

  • CUSTOM mode requires non-empty custom_queues
  • DEFAULT mode must not have custom_queues
  • Queue names must be unique
  • cluster_wide_cap must be positive if set
  • prefetch_buffer must be non-negative
  • claim_lease_ms is required when prefetch_buffer > 0
  • claim_lease_ms is optional when prefetch_buffer = 0 (overrides the default 60s lease)
  • cluster_wide_cap cannot be combined with prefetch_buffer > 0

Multiple validation errors within the same phase are collected and reported together (compiler-style), rather than stopping at the first error.

Use app.check() to run static validation before starting a worker or scheduler. Use app.check_live() to additionally connect to PostgreSQL, ensure the schema, and verify broker connectivity.

```rust
// Static validation — returns Result<(), HorsiesError>
app.check()?;

// Static + broker connectivity check
app.check_live().await?;
```

Phases:

| Phase | What it validates | Gating |
| --- | --- | --- |
| 1. Config | `AppConfig`, `RecoveryConfig`, `ScheduleConfig` consistency | Validated at construction (implicit pass) |
| 2. Task registry | All registered tasks have valid names and queue assignments | Errors stop progression to later phases |
| 3. Workflows | `WorkflowSpec` DAG validation (cycles, missing deps, type mismatches) | Collected alongside Phase 2 |
| 4. Broker (if `check_live`) | Connects to PostgreSQL, ensures the Horsies schema, and runs `SELECT 1` | Only runs if earlier phases pass |

Returns Ok(()) when all validations pass, or Err(HorsiesError) with diagnostic messages.
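As a minimal sketch of handling that result (assuming `HorsiesError`'s error output renders the collected diagnostics, which the compiler-style reporting above suggests), a startup path might surface the errors and exit:

```rust
// Hypothetical startup guard: print collected diagnostics and bail.
if let Err(e) = app.check() {
    eprintln!("horsies configuration invalid:\n{e}");
    std::process::exit(1);
}
```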

CLI equivalent:

```shell
horsies check ./config/horsies.toml        # Static validation
horsies check ./config/horsies.toml --live # Static + broker connectivity
```

Log the configuration (with masked password):

```rust
use tracing::info;

info!("AppConfig:\n{}", config.format_for_logging());
```

Output:

```text
AppConfig:
  queue_mode: DEFAULT
  broker:
    database_url: postgresql://user:***@localhost/db
    pool_size: 30
    max_overflow: 30
  recovery:
    auto_requeue_stale_claimed: true
  ...
```