fix(CQ-API-5): Document throttler in-memory fallback as best-effort
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
Add comprehensive JSDoc and inline comments documenting the known race
condition in the in-memory fallback path of ThrottlerValkeyStorageService.

The non-atomic read-modify-write in incrementMemory() is intentionally left
without a mutex because:

- It is only the fallback path when Valkey is unavailable
- The primary Valkey path uses atomic INCR and is race-free
- Adding locking to a rarely-used degraded path adds complexity with minimal benefit

Also adds Logger.warn calls when falling back to in-memory mode at runtime
(Redis command failures).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
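The lost-update scenario the commit message describes can be reproduced in isolation. The following is a hypothetical standalone sketch (the `demo` function, the key string, and the `gate` promise are illustrative and not part of the service); the `gate` stands in for any `await` that can occur between the Map read and the write-back:

```typescript
// Two concurrent callers both read the same (empty) timestamp snapshot before
// either writes back, so one increment is silently lost.
async function demo(): Promise<number> {
  const storage = new Map<string, number[]>();
  let release!: () => void;
  const gate = new Promise<void>((resolve) => (release = resolve));

  const increment = async (key: string): Promise<void> => {
    const timestamps = storage.get(key) ?? []; // read
    await gate;                                // the other caller runs here
    timestamps.push(Date.now());
    storage.set(key, timestamps);              // write-back may clobber a concurrent write
  };

  const a = increment("ip:1.2.3.4");
  const b = increment("ip:1.2.3.4");
  release();
  await Promise.all([a, b]);
  return storage.get("ip:1.2.3.4")!.length;
}

demo().then((n) => console.log(n)); // 1: one of the two increments was lost
```

This is exactly why the surviving count can be lower than the true number of requests, making the limiter slightly permissive rather than overly strict.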
@@ -16,11 +16,18 @@ interface ThrottlerStorageRecord {
 /**
  * Redis-based storage for rate limiting using Valkey
  *
- * This service uses Valkey (Redis-compatible) as the storage backend
- * for rate limiting. This allows rate limits to work across multiple
- * API instances in a distributed environment.
+ * This service uses Valkey (Redis-compatible) as the primary storage backend
+ * for rate limiting, which provides atomic operations and allows rate limits
+ * to work correctly across multiple API instances in a distributed environment.
  *
- * If Redis is unavailable, falls back to in-memory storage.
+ * **Fallback behavior:** If Valkey is unavailable (connection failure or command
+ * error), the service falls back to in-memory storage. The in-memory mode is
+ * **best-effort only** — it uses a non-atomic read-modify-write pattern that may
+ * allow slightly more requests than the configured limit under high concurrency.
+ * This is an acceptable trade-off because the fallback path is only used when
+ * the primary distributed store is down, and adding mutex/locking complexity for
+ * a degraded-mode code path provides minimal benefit. In-memory rate limits are
+ * also not shared across API instances.
  */
 @Injectable()
 export class ThrottlerValkeyStorageService implements ThrottlerStorage, OnModuleInit {
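For contrast with the fallback path, the atomicity the doc comment attributes to the primary path can be illustrated with a small stand-in. This is a hypothetical sketch: the real service would issue `INCR`/`PEXPIRE` against a Valkey/Redis client (where `INCR` is atomic server-side), and `FakeValkey` here is only an in-process substitute so the pattern is runnable:

```typescript
// In-process stand-in for the two Valkey commands the pattern needs.
// Real code would call a Redis-compatible client instead.
class FakeValkey {
  private counters = new Map<string, { value: number; expiresAt: number }>();

  incr(key: string, now = Date.now()): number {
    const entry = this.counters.get(key);
    if (!entry || entry.expiresAt <= now) {
      // Missing or expired key: INCR creates it at 1 with no TTL yet.
      this.counters.set(key, { value: 1, expiresAt: Infinity });
      return 1;
    }
    entry.value += 1;
    return entry.value;
  }

  pexpire(key: string, ttlMs: number, now = Date.now()): void {
    const entry = this.counters.get(key);
    if (entry) entry.expiresAt = now + ttlMs;
  }
}

// Fixed-window increment: INCR returns the new count atomically, and the TTL
// is set only on the first hit so later requests do not extend the window.
function increment(store: FakeValkey, key: string, ttlMs: number): number {
  const hits = store.incr(key);
  if (hits === 1) store.pexpire(key, ttlMs);
  return hits;
}

const store = new FakeValkey();
console.log(increment(store, "ip:1.2.3.4", 60_000)); // 1
console.log(increment(store, "ip:1.2.3.4", 60_000)); // 2
```

Because the server applies each `INCR` atomically, no interleaving of concurrent callers can lose a count, which is why the primary path needs no locking.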
@@ -95,7 +102,10 @@ export class ThrottlerValkeyStorageService implements ThrottlerStorage, OnModule
       } catch (error) {
         const errorMessage = error instanceof Error ? error.message : String(error);
         this.logger.error(`Redis increment failed: ${errorMessage}`);
-        // Fall through to in-memory
+        this.logger.warn(
+          "Falling back to in-memory rate limiting for this request. " +
+            "In-memory mode is best-effort and may be slightly permissive under high concurrency."
+        );
         totalHits = this.incrementMemory(throttleKey, ttl);
       }
     } else {
@@ -129,7 +139,10 @@ export class ThrottlerValkeyStorageService implements ThrottlerStorage, OnModule
       } catch (error) {
         const errorMessage = error instanceof Error ? error.message : String(error);
         this.logger.error(`Redis get failed: ${errorMessage}`);
-        // Fall through to in-memory
+        this.logger.warn(
+          "Falling back to in-memory rate limiting for this request. " +
+            "In-memory mode is best-effort and may be slightly permissive under high concurrency."
+        );
       }
     }
 
@@ -138,7 +151,26 @@ export class ThrottlerValkeyStorageService implements ThrottlerStorage, OnModule
   }
 
   /**
-   * In-memory increment implementation
+   * In-memory increment implementation (best-effort rate limiting).
+   *
+   * **Race condition note:** This method uses a non-atomic read-modify-write
+   * pattern (read from Map -> filter -> push -> write to Map). Under high
+   * concurrency, multiple async operations could read the same snapshot of
+   * timestamps before any of them write back, causing some increments to be
+   * lost. This means the rate limiter may allow slightly more requests than
+   * the configured limit.
+   *
+   * This is intentionally left without a mutex/lock because:
+   * 1. This is the **fallback** path, only used when Valkey is unavailable.
+   * 2. The primary Valkey path uses atomic INCR operations and is race-free.
+   * 3. Adding locking complexity to a rarely-used degraded code path provides
+   *    minimal benefit while increasing maintenance burden.
+   * 4. In degraded mode, "slightly permissive" rate limiting is preferable
+   *    to added latency or deadlock risk from synchronization primitives.
+   *
+   * @param key - The throttle key to increment
+   * @param ttl - Time-to-live in milliseconds for the sliding window
+   * @returns The current hit count (may be slightly undercounted under concurrency)
+   */
   private incrementMemory(key: string, ttl: number): number {
     const now = Date.now();
@@ -150,7 +182,8 @@ export class ThrottlerValkeyStorageService implements ThrottlerStorage, OnModule
     // Add new timestamp
     validTimestamps.push(now);
 
-    // Store updated timestamps
+    // NOTE: Non-atomic write — concurrent calls may overwrite each other's updates.
+    // See method JSDoc for why this is acceptable in the fallback path.
     this.fallbackStorage.set(key, validTimestamps);
 
     return validTimestamps.length;
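The sliding-window bookkeeping in the hunk above (prune expired timestamps, push the new one, count what remains) can be sketched as a pure function. `slidingWindowHit` is a hypothetical helper, not a method of the actual service:

```typescript
// Drop timestamps that fell out of the TTL window, record the new hit, and
// return how many hits remain inside the window.
function slidingWindowHit(
  timestamps: number[],
  now: number,
  ttlMs: number,
): { timestamps: number[]; hits: number } {
  const valid = timestamps.filter((t) => now - t < ttlMs); // prune expired
  valid.push(now); // record this request
  return { timestamps: valid, hits: valid.length };
}

// Three hits inside a 1000 ms window, then one after the window has passed:
let r = slidingWindowHit([], 0, 1000);
r = slidingWindowHit(r.timestamps, 100, 1000);
r = slidingWindowHit(r.timestamps, 200, 1000);
console.log(r.hits); // 3
r = slidingWindowHit(r.timestamps, 1500, 1000);
console.log(r.hits); // 1: the first three timestamps expired
```

Written this way the logic is trivially correct for a single caller; the documented hazard only appears when two callers interleave between reading the array out of the Map and writing it back.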