Langfuse Security Assessment
Executive Summary
This security assessment was conducted using Kolega.dev's automated security remediation platform, which combines traditional security scanning (SAST, SCA, secrets detection) with proprietary AI-powered deep code analysis. Our two-tier detection approach identified vulnerabilities that standard tools miss, including complex logic flaws and cross-service injection vectors.
Our analysis of the Langfuse repository identified 4 vulnerabilities through Kolega.dev Deep Code Scan (Tier 2) that warrant attention.
Vulnerability Overview
ID | Title | PR/Ticket |
V1 | No Server-Side Validation for data-access-days (Data Retention) Limit | Accepted: risk accepted by maintainer due to minimal impact on revenue |
V2 | Missing Server-Side Enforcement of organization-member-count Entitlement Limit | Accepted: risk accepted by maintainer due to minimal impact on revenue |
V3 | PostHog Integration - SSRF Vulnerability via User-Supplied Hostname | Fixed: merged in PR 11311 |
V4 | Blob Storage Integration Stores Secrets Unencrypted | |
Responsible Disclosure Timeline
Kolega.dev follows responsible disclosure practices. We coordinated privately through Langfuse's official security reporting channel.
23 December 2025 | Initial report sent to Langfuse by email to security@langfuse.com |
24 December 2025 | Response from Langfuse confirming that V1 and V2 are accepted risks and that a fix for V3 was completed and merged in PR 11311 |
Vulnerability Details
V1: No Server-Side Validation for data-access-days (Data Retention) Limit
CWE: CWE-1285 (Improper Validation of Specified Quantity in Input)
Location: web/src/components/date-range-dropdowns.tsx
Description
The data-access-days entitlement limit restricts how far back users can access historical data. This limit is only enforced at the UI level by disabling date range options. There is no server-side validation that prevents API queries from accessing data beyond the retention window.
Evidence
The limit is read in date-range-dropdowns.tsx to disable UI options, but no corresponding server-side validation exists in the ClickHouse query endpoints. The Traces API and other data-access endpoints do not validate the data-access-days limit.
Impact
Users can bypass the UI restriction and make direct API calls to query historical data beyond their retention limit. On a Cloud Hobby plan (30-day limit), a user could query data from a year ago by calling the ClickHouse-backed query endpoints directly.
Remediation
Add server-side validation in all data query endpoints (traces, observations, scores, etc.) to enforce the data-access-days limit. Calculate the minimum allowed timestamp based on the plan and reject queries requesting older data.
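A minimal sketch of such a guard, assuming a hypothetical enforceDataAccessWindow helper that each ClickHouse-backed endpoint calls with the plan's limit and the requested start of the time range (names and shapes are illustrative, not Langfuse's actual API):

```typescript
// Hypothetical server-side guard for the data-access-days entitlement limit.
// Wire this into each ClickHouse-backed query endpoint (traces, observations,
// scores, ...), resolving dataAccessDays from the organization's plan.
function enforceDataAccessWindow(params: {
  dataAccessDays: number | null; // null = unlimited retention access
  requestedFromTimestamp: Date; // start of the requested time range
}): void {
  const { dataAccessDays, requestedFromTimestamp } = params;
  if (dataAccessDays === null) return; // no limit on this plan

  const minAllowedTimestamp = new Date(
    Date.now() - dataAccessDays * 24 * 60 * 60 * 1000,
  );
  if (requestedFromTimestamp < minAllowedTimestamp) {
    throw new Error(
      `Requested time range exceeds the ${dataAccessDays}-day data access limit`,
    );
  }
}
```

Depending on the desired behavior, endpoints could instead silently clamp the requested range to the minimum allowed timestamp rather than rejecting the query.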
V2: Missing Server-Side Enforcement of organization-member-count Entitlement Limit
CWE: CWE-1285 (Improper Validation of Specified Quantity in Input)
Location: web/src/features/rbac/server/membersRouter.ts:111-362
Description
The organization-member-count entitlement limit is enforced only at the UI level through ActionButton disabling. The membersRouter.create() mutation does not perform server-side validation of this limit, allowing authenticated users to bypass the UI and create unlimited organization members regardless of their plan's member quota.
Evidence
Lines 269-275: the membership is created without checking whether the organization has reached its member limit. The helper function throwIfExceedsLimit() exists in hasEntitlementLimit.ts but is never called. Client-side enforcement in CreateProjectMemberButton.tsx only disables UI buttons.
Impact
An authenticated attacker can make direct API calls to exceed the organization member limit, potentially creating hundreds or thousands of members on a free-tier plan that should be limited to 2 members.
Remediation
Add server-side validation using throwIfExceedsLimit() before creating the membership. Example: await throwIfExceedsLimit({ sessionUser: ctx.session.user, orgId: input.orgId, entitlementLimit: 'organization-member-count', currentUsage: existingMemberCount });
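A sketch of where that check could sit; the import path, Prisma model name, and member-count query are assumptions based on the description above, while the throwIfExceedsLimit() argument shape follows the example:

```typescript
// Sketch only: import paths and the Prisma model/field names are assumptions;
// the throwIfExceedsLimit signature mirrors the remediation example above.
import type { PrismaClient } from "@prisma/client";
import { throwIfExceedsLimit } from "./hasEntitlementLimit";

type LimitArgs = Parameters<typeof throwIfExceedsLimit>[0];

async function assertMemberQuotaNotExceeded(
  prisma: PrismaClient,
  sessionUser: LimitArgs["sessionUser"], // pass ctx.session.user
  orgId: string,
): Promise<void> {
  // Count the organization's current members (model name assumed).
  const existingMemberCount = await prisma.organizationMembership.count({
    where: { orgId },
  });

  // Throws when the plan's organization-member-count limit would be exceeded.
  await throwIfExceedsLimit({
    sessionUser,
    orgId,
    entitlementLimit: "organization-member-count",
    currentUsage: existingMemberCount,
  });
}
```

Calling this at the top of membersRouter.create(), before the membership row is inserted around lines 269-275, closes the bypass for direct API calls.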
V3: PostHog Integration - SSRF Vulnerability via User-Supplied Hostname
CWE: CWE-918 (Server-Side Request Forgery)
Location: worker/src/features/posthog/handlePostHogIntegrationProjectJob.ts:43-50
Description
The PostHog integration allows users to configure a custom PostHog hostname, which is used without SSRF validation. An attacker can configure the hostname to point to internal services, causing the worker to make requests to internal APIs, databases, or cloud metadata endpoints.
Evidence
The postHogHost field is validated only with a Zod .url() check (basic syntax validation); no SSRF protection is applied. The webhook validation endpoint includes comprehensive IP blocking (RFC1918, loopback, cloud metadata IPs), but the PostHog configuration bypasses these checks. An attacker can set postHogHost to "http://127.0.0.1:8080", "http://169.254.169.254" (AWS metadata), or "http://internal-db:5432".
Impact
Server-Side Request Forgery (SSRF): (1) Access to internal services (databases, APIs), (2) Cloud metadata endpoint access (AWS IAM credentials), (3) Information disclosure about internal infrastructure, (4) Potential code execution if internal services have vulnerabilities.
Remediation
Apply SSRF validation to PostHog hostname: (1) Use existing validateWebhookURL() function on postHogHost configuration, (2) Block private IP ranges, localhost, cloud metadata endpoints, (3) Validate hostname at configuration time, (4) Re-validate before making PostHog API calls, (5) Add security tests for SSRF protection.
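The check below is a standalone sketch of the idea, written against Node's dns and net modules plus the ipaddr.js library (an assumption; any IP-range parser works). In Langfuse itself, reusing the existing validateWebhookURL() logic referenced above is the cleaner fix:

```typescript
// Sketch: reject PostHog hosts that resolve to private, loopback, link-local
// (cloud metadata), or otherwise non-public address ranges.
import { lookup } from "node:dns/promises";
import { isIP } from "node:net";
import ipaddr from "ipaddr.js"; // assumption: available as a dependency

async function assertPublicPostHogHost(postHogHost: string): Promise<void> {
  const { hostname } = new URL(postHogHost);

  // Resolve the hostname so DNS tricks (a domain pointing at 169.254.169.254,
  // for example) are caught as well as literal IPs.
  const address = isIP(hostname) ? hostname : (await lookup(hostname)).address;

  const range = ipaddr.parse(address).range();
  if (range !== "unicast") {
    throw new Error(`PostHog host resolves to a blocked address range: ${range}`);
  }
}
```

Because DNS answers can change between configuration time and job execution, the same check should run again in the worker immediately before each PostHog API call, per item (4) above.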
V4: Blob Storage Integration Stores Secrets Unencrypted
CWE: CWE-312 (Cleartext Storage of Sensitive Information)
Location: web/src/pages/api/public/integrations/blob-storage/index.ts:145-162
Description
S3 credentials (secretAccessKey) are accepted from requests and stored directly in the database without encryption. If the database is compromised, attackers gain full access to the configured S3 buckets and can read or modify all stored data.
Evidence
The secretAccessKey from the request body is stored directly in the blobStorageIntegration table without any encryption. Database backups, logs, and exports would contain plaintext AWS credentials.
Impact
If the database is breached or accessed by an internal threat actor, all S3 credentials are exposed. Attackers can read all trace data in S3, delete evidence, modify stored files, plant malware, or exfiltrate sensitive business data stored in S3.
Remediation
Encrypt secretAccessKey using application-level encryption before storage (e.g., @langfuse/shared/encryption); a sketch follows this list.
Never store cleartext credentials.
Validate that the S3 credentials work before accepting them (test write/read).
Consider using IAM role assumption instead of access keys.
Implement a key rotation mechanism.
Add audit logging for credential access.
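A sketch of the first two points, assuming @langfuse/shared/encryption exposes encrypt()/decrypt() helpers keyed by a server-side secret (an assumption about that module's API):

```typescript
// Sketch only: encrypt()/decrypt() and their exact module path are assumed.
import { encrypt, decrypt } from "@langfuse/shared/encryption";

// On write (POST /api/public/integrations/blob-storage): never persist the
// cleartext secretAccessKey.
function toStoredIntegration(body: { accessKeyId: string; secretAccessKey: string }) {
  return {
    accessKeyId: body.accessKeyId,
    secretAccessKey: encrypt(body.secretAccessKey),
  };
}

// On use (e.g. when the worker exports to S3): decrypt just-in-time so the
// cleartext value never reaches the database, logs, or API responses.
function toS3Credentials(row: { accessKeyId: string; secretAccessKey: string }) {
  return {
    accessKeyId: row.accessKeyId,
    secretAccessKey: decrypt(row.secretAccessKey),
  };
}
```

With this in place, database backups, logs, and exports only ever contain the ciphertext, and decryption happens just-in-time in the worker.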