Task Cache

Overview

High-performance Redis cache for frequently accessed tasks, reducing database load and improving response times for task queries.

Caching Strategy

Task Data Caching

  • Hot Tasks: Recently viewed/edited tasks (5-minute TTL)
  • Task Details: Complete task objects (10-minute TTL)
  • Task Lists: Board/sprint task lists (2-minute TTL)
  • Task Counts: Status counts per workspace (5-minute TTL)

Query Result Caching

  • Search Results: Recent search queries (30-second TTL)
  • Filter Results: Common filter combinations (1-minute TTL)
  • Assigned Tasks: Per-user task lists (2-minute TTL)

Technical Details

Specifications:

  • Version: Redis 7.x
  • Memory: 64 GB
  • Persistence: RDB snapshots (no AOF for cache)
  • Replication: Primary with 2 read replicas
  • Clustering: Redis Cluster with 6 shards

Performance:

  • Latency: P99 < 1ms
  • Throughput: 200,000 ops/second
  • Hit Rate: Target 85%+
  • Eviction Policy: LRU (Least Recently Used)
  • Connections: Max 5,000

Data Structures

Task Cache Keys

task:{task_id} -> Task object (hash)
- id
- workspace_id
- title
- description
- status
- priority
- assignees (JSON array)
- labels (JSON array)
- due_date
- created_at
- updated_at
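
Flattening a task into that hash could look like the sketch below. The `Task` struct and helper name are assumptions; the field names and the JSON encoding of the multi-valued fields follow the schema above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Task mirrors the cached hash fields listed above (struct shape is assumed).
type Task struct {
	ID          string
	WorkspaceID string
	Title       string
	Status      string
	Priority    string
	Assignees   []string
	Labels      []string
	DueDate     time.Time
	CreatedAt   time.Time
	UpdatedAt   time.Time
}

// toHashFields flattens a Task into the field map stored at task:{task_id}.
// Multi-valued fields (assignees, labels) are JSON-encoded strings, per the schema.
func toHashFields(t Task) map[string]string {
	assignees, _ := json.Marshal(t.Assignees)
	labels, _ := json.Marshal(t.Labels)
	return map[string]string{
		"id":           t.ID,
		"workspace_id": t.WorkspaceID,
		"title":        t.Title,
		"status":       t.Status,
		"priority":     t.Priority,
		"assignees":    string(assignees),
		"labels":       string(labels),
		"due_date":     t.DueDate.Format(time.RFC3339),
		"created_at":   t.CreatedAt.Format(time.RFC3339),
		"updated_at":   t.UpdatedAt.Format(time.RFC3339),
	}
}

func main() {
	fields := toHashFields(Task{ID: "t1", Status: "todo", Assignees: []string{"u1", "u2"}})
	fmt.Println(fields["assignees"]) // ["u1","u2"]
}
```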

Task List Cache

tasks:workspace:{workspace_id}:board:{board_id} -> Task IDs (sorted set)
Score: task position/rank
TTL: 2 minutes
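
Because the score is the board position, a plain ZRANGE over this key returns tasks in display order. A sketch of building the key and the member/score pairs (the `ZEntry` type and helper names are illustrative):

```go
package main

import "fmt"

// ZEntry pairs a sorted-set member with its score, as passed to ZADD.
type ZEntry struct {
	Member string
	Score  float64
}

// boardKey builds the cache key for a board's task list.
func boardKey(workspaceID, boardID string) string {
	return fmt.Sprintf("tasks:workspace:%s:board:%s", workspaceID, boardID)
}

// boardEntries turns an ordered slice of task IDs into sorted-set entries,
// using the list position as the score so ZRANGE preserves board order.
func boardEntries(taskIDs []string) []ZEntry {
	entries := make([]ZEntry, len(taskIDs))
	for i, id := range taskIDs {
		entries[i] = ZEntry{Member: id, Score: float64(i)}
	}
	return entries
}

func main() {
	fmt.Println(boardKey("w1", "b9"))             // tasks:workspace:w1:board:b9
	fmt.Println(boardEntries([]string{"a", "b"})) // [{a 0} {b 1}]
}
```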

Task Counters

tasks:workspace:{workspace_id}:counts -> Status counts (hash)
- todo: 42
- in_progress: 15
- done: 128
TTL: 5 minutes
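
These counters can be kept in step with the database by issuing paired HINCRBY calls on every status transition instead of recounting. A sketch of computing the increments (types and names are assumptions):

```go
package main

import "fmt"

// counterOp describes one HINCRBY against the workspace counts hash.
type counterOp struct {
	Field string
	Delta int64
}

// statusTransitionOps returns the counter increments needed when a task
// moves from oldStatus to newStatus; a no-op transition touches nothing.
func statusTransitionOps(oldStatus, newStatus string) []counterOp {
	if oldStatus == newStatus {
		return nil
	}
	return []counterOp{
		{Field: oldStatus, Delta: -1},
		{Field: newStatus, Delta: +1},
	}
}

func main() {
	fmt.Println(statusTransitionOps("todo", "in_progress")) // [{todo -1} {in_progress 1}]
}
```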

User Task Cache

tasks:user:{user_id}:assigned -> Assigned task IDs (set)
TTL: 2 minutes

Cache Invalidation

Cache invalidation happens on task updates, assignments, or status changes.

  1. Write-Through (Invalidate-on-Write)

    Updates write to the database first, then delete the affected cache entries

  2. Event-Based Invalidation

    Kafka events trigger targeted cache invalidation

  3. Bulk Invalidation

    Board/sprint moves invalidate all related caches

  4. TTL Safety Net

    All cache entries have TTL for eventual consistency
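
The event-based path reduces to computing, per task-change event, the full set of keys to DEL in one batch. A sketch assuming a simplified event shape (the `TaskEvent` fields are illustrative, not the actual Kafka schema):

```go
package main

import "fmt"

// TaskEvent is an assumed shape for a task-change event consumed from Kafka.
type TaskEvent struct {
	TaskID      string
	WorkspaceID string
	BoardID     string
	Assignees   []string
}

// keysToInvalidate computes every cache key affected by a task change:
// the task object, its board list, the workspace counters, and each
// assignee's per-user list. The consumer issues one DEL for the batch.
func keysToInvalidate(e TaskEvent) []string {
	keys := []string{
		"task:" + e.TaskID,
		fmt.Sprintf("tasks:workspace:%s:board:%s", e.WorkspaceID, e.BoardID),
		fmt.Sprintf("tasks:workspace:%s:counts", e.WorkspaceID),
	}
	for _, u := range e.Assignees {
		keys = append(keys, "tasks:user:"+u+":assigned")
	}
	return keys
}

func main() {
	e := TaskEvent{TaskID: "t1", WorkspaceID: "w1", BoardID: "b1", Assignees: []string{"u1"}}
	fmt.Println(keysToInvalidate(e))
}
```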

Cache Patterns

Cache-Aside (Lazy Loading)

// rdb is a shared Redis client and db the task repository (both assumed globals);
// tasks are stored as JSON strings for this example.
func GetTask(ctx context.Context, taskID string) (*Task, error) {
    key := "task:" + taskID
    // Try cache first
    if data, err := rdb.Get(ctx, key).Bytes(); err == nil {
        var task Task
        if err := json.Unmarshal(data, &task); err == nil {
            return &task, nil
        }
    }
    // Cache miss (or decode failure) - load from the database
    task, err := db.GetTask(ctx, taskID)
    if err != nil {
        return nil, err
    }
    // Store in cache for subsequent reads (10-minute TTL)
    if data, err := json.Marshal(task); err == nil {
        rdb.Set(ctx, key, data, 10*time.Minute)
    }
    return task, nil
}

Write-Through (Invalidate-on-Write)

func UpdateTask(ctx context.Context, task *Task) error {
    // Update the database first (source of truth)
    if err := db.UpdateTask(ctx, task); err != nil {
        return err
    }
    // Invalidate the cached task object
    rdb.Del(ctx, "task:"+task.ID)
    // Invalidate the per-user list for every assignee (assignees is an array)
    for _, userID := range task.Assignees {
        rdb.Del(ctx, "tasks:user:"+userID+":assigned")
    }
    return nil
}

Memory Management

  • Max Memory: 64 GB
  • Eviction Policy: allkeys-lru
  • Memory Fragmentation: keep fragmentation ratio below 1.5
  • Key Expiration: Active + passive expiration
  • Lazy Freeing: Async deletion of large keys
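
A hedged sketch of the redis.conf settings these bullets imply (values mirror the list above and are not the production file):

```conf
# Cap memory and evict least-recently-used keys across the whole keyspace
maxmemory 64gb
maxmemory-policy allkeys-lru

# RDB snapshots only; AOF disabled for a pure cache
appendonly no
save 900 1

# Async (lazy) freeing so deleting or evicting large keys does not block
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-user-del yes
```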

Monitoring

Key Metrics

  • Hit Rate: Monitor cache effectiveness
  • Eviction Rate: Track memory pressure
  • Memory Usage: Alert at 80% capacity
  • Latency: alert if P99 exceeds 2ms (steady-state target is < 1ms, per the specs above)
  • Connection Utilization: Track active connections

Alerts

  • ⚠️ Hit rate < 80%
  • ⚠️ Memory usage > 85%
  • ⚠️ Eviction rate > 1000/min
  • 🚨 Primary node down
  • 🚨 Replication lag > 10 seconds

Performance Optimizations

  • Pipelining: Batch multiple commands
  • Connection Pooling: Reuse connections
  • Compression: Large values compressed
  • Key Naming: Efficient namespace design
  • Read Replicas: Distribute read load
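
Pipelining amortizes round trips by sending commands in batches; a common prerequisite is a helper that chunks a key list so each batch becomes one pipelined MGET or DEL (the helper and batch size are illustrative):

```go
package main

import "fmt"

// chunkKeys splits a key list into batches of at most n keys, so each
// batch can be sent as a single pipelined command instead of one round
// trip per key.
func chunkKeys(keys []string, n int) [][]string {
	var batches [][]string
	for len(keys) > n {
		batches = append(batches, keys[:n])
		keys = keys[n:]
	}
	if len(keys) > 0 {
		batches = append(batches, keys)
	}
	return batches
}

func main() {
	fmt.Println(len(chunkKeys(make([]string, 250), 100))) // 3
}
```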

Security

  • Authentication: Redis AUTH enabled
  • TLS: Encrypted connections
  • Network: Private VPC only
  • ACLs: Command restrictions per client