
Langfuse v3 is now stable and ready for production use when self-hosting Langfuse, bringing many scalability and architectural improvements.
This is the biggest Langfuse release since we initially launched early last year. Thank you to everyone who contributed to the release and provided feedback via GitHub Discussions (v3 thread)!
If you use Langfuse Cloud, you'll mainly notice improved performance and reliability. You've already been using many v3 features over recent months. These architectural improvements help Langfuse scale effectively and enable new analytical features for processing large volumes of production data.
If you self-host Langfuse, a lot is changing: v3 introduces a new architecture optimized for scalability, reliability, and performance. Read on to learn more about the new architecture and the new features.
Langfuse has gained significant traction over the last months, both in our Cloud environment and in self-hosted setups. With Langfuse v3 we introduce changes that allow our backend to handle hundreds of events per second with higher reliability. To achieve this scale, we introduce a second Langfuse container and additional storage services (S3/Blob store, Clickhouse, and Redis) that are better suited to the required workloads than our previous Postgres-based setup.
In short, Langfuse v3 adds a second application container (an asynchronous worker) and new storage services: an S3/Blob store, Clickhouse, and Redis.
Architecture Diagram
Langfuse consists of two application containers, storage components, and an optional LLM API/Gateway.
Langfuse can be deployed within a VPC or on-premises in high-security environments. Internet access is optional. See networking documentation for more details.
Since adopting this infrastructure on Langfuse Cloud, we have observed a significant improvement in reliability and performance.
Some of the more recent launches depended on the new architecture and were thus only available on Langfuse Cloud. This changes today with v3.0.0: going forward, we do not plan to have a feature gap when self-hosting Langfuse (OSS, Pro, Enterprise).
Learn more about the v2 to v3 evolution and architectural decisions in our technical blog post.
We made the strategic decision to migrate our traces, observations, and scores tables from Postgres to Clickhouse. Both we and our self-hosters observed bottlenecks in Postgres when dealing with millions of rows of tracing data, both on ingestion and on retrieval. Our core requirement was a database that could handle massive volumes of trace and event data with exceptional query speed and efficiency while also being freely available to self-hosters.
Limitations of Postgres
Initially, Postgres was an excellent choice due to its robustness, flexibility, and the extensive tooling available. As our platform grew, we encountered performance bottlenecks with complex aggregations and time-series data. The row-based storage model of PostgreSQL becomes increasingly inefficient when dealing with billions of rows of tracing data, leading to slow query times and high resource consumption.
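To make the workload concrete, here is a minimal sketch of the kind of aggregation that columnar storage handles well, using the @clickhouse/client Node.js package. The table and column names (traces, project_id, latency_ms) are illustrative assumptions, not Langfuse's actual schema.

```typescript
import { createClient } from "@clickhouse/client";

// Connection details are illustrative.
const clickhouse = createClient({ url: "http://localhost:8123" });

// A typical analytical query: daily trace counts and p95 latency per
// project. A columnar store only scans the referenced columns, which keeps
// aggregations like this fast even over billions of rows.
const resultSet = await clickhouse.query({
  query: `
    SELECT
      project_id,
      toDate(timestamp) AS day,
      count() AS trace_count,
      quantile(0.95)(latency_ms) AS p95_latency_ms
    FROM traces
    GROUP BY project_id, day
    ORDER BY day DESC
    LIMIT 100
  `,
  format: "JSONEachRow",
});

console.log(await resultSet.json());
```

In a row-based store, the same query would read every column of every row; the columnar layout is what makes this class of query cheap.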
Why Clickhouse is great
When talking to other companies and looking at their code bases, we learned that Clickhouse is a popular choice these days for analytical workloads. Many modern observability tools, such as SigNoz or PostHog, as well as established companies like Cloudflare, use Clickhouse for this purpose.
Clickhouse vs. others
We think there are many great OLAP databases out there and are confident that we could have chosen an alternative and succeeded with it as well.
Building an adapter and supporting multiple databases
We explored building a multi-database adapter to support Postgres for smaller self-hosted deployments. After talking to engineers and reviewing some of PostHog's Clickhouse implementation, we decided against this path due to its complexity and maintenance overhead. This allows us to focus our resources on building user features instead.
We added a Redis instance to serve cache and queue use-cases within our stack. With its open source license, broad native support by major cloud vendors, and ubiquity in the industry, Redis was a natural choice for us.
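As a sketch of the cache use-case, here is a simple cache-aside helper using the ioredis package; the key names, TTL, and stubbed lookup are illustrative, not Langfuse internals.

```typescript
import Redis from "ioredis";

// Connects to redis://localhost:6379 by default.
const redis = new Redis();

// Cache-aside: return the cached value if present; otherwise compute it,
// store it with a TTL, and return it.
async function getCached<T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T;

  const value = await compute();
  await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}

// Hypothetical expensive lookup, stubbed for the example.
async function fetchProjectFromDb(id: string): Promise<{ id: string }> {
  return { id };
}

const project = await getCached("project:123", 60, () => fetchProjectFromDb("123"));
```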
Observability data for LLM applications tends to contain large, semi-structured bodies of data representing inputs and outputs. We chose S3/Blob Store as a scalable, secure, and cost-effective solution to store these large objects. It allows us to store all incoming events for further processing and acts as a native backup solution, as the full state can be restored from the events stored there.
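Here is a minimal sketch of offloading a large event body to blob storage with the AWS SDK v3 S3 client; the bucket name and key layout are illustrative assumptions. Because every raw event is persisted as an object, the full state can later be replayed from the bucket.

```typescript
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

// Region and credentials are read from the environment; any S3-compatible
// blob store (e.g. MinIO) can be targeted via a custom endpoint.
const s3 = new S3Client({});

// Persist the raw event body as an object instead of a database row.
// Bucket name and key layout are illustrative.
async function storeEventBody(eventId: string, body: unknown): Promise<string> {
  const key = `events/${eventId}.json`;
  await s3.send(
    new PutObjectCommand({
      Bucket: "langfuse-events",
      Key: key,
      Body: JSON.stringify(body),
      ContentType: "application/json",
    }),
  );
  return key;
}
```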
When processing observability data for LLM applications, there are many CPU-heavy operations that block the main loop in our Node.js backend, e.g. tokenization and other parsing of event bodies. To achieve high availability and low latencies across client applications, we decided to move the heavy processing into an asynchronous worker container. It accepts events from a Redis queue and ensures that they are eventually upserted into Clickhouse.
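The pattern looks roughly like the following sketch, using BullMQ (a Redis-backed queue library for Node.js) together with @clickhouse/client; the queue name, job shape, and table are illustrative assumptions rather than Langfuse's actual implementation.

```typescript
import { Queue, Worker } from "bullmq";
import { createClient } from "@clickhouse/client";

const connection = { host: "localhost", port: 6379 };
const clickhouse = createClient({ url: "http://localhost:8123" });

// Web container: enqueue the incoming event and return immediately,
// keeping request latency low for client applications.
const ingestionQueue = new Queue("ingestion", { connection });
await ingestionQueue.add("event", { id: "evt_1", body: { name: "my-trace" } });

// Worker container: pull events off the queue, do the CPU-heavy parsing
// outside the web container's event loop, and write the result to Clickhouse.
new Worker(
  "ingestion",
  async (job) => {
    const row = parseEvent(job.data); // stand-in for tokenization/parsing
    await clickhouse.insert({
      table: "observations", // illustrative table name
      values: [row],
      format: "JSONEachRow",
    });
  },
  { connection },
);

// Stubbed parsing step; the real work (tokenization etc.) is CPU-bound.
function parseEvent(data: { id: string; body: unknown }) {
  return { id: data.id, body: JSON.stringify(data.body) };
}
```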
We have released an extensive migration guide for upgrading from Langfuse v2 to v3.
High-level upgrade steps:
1. Provision the new storage services: Clickhouse, Redis, and an S3-compatible blob store.
2. Deploy the v3 application containers, including the new asynchronous worker container.
3. Run the background migration to move existing tracing data from Postgres to Clickhouse.
Watch this video to get an understanding of the upgrade process:
Please reach out if you have any questions while upgrading! We tried to make the upgrade as seamless as possible, since thousands of teams rely on Langfuse in production.
We used the v3 release as an opportunity to overhaul the self-hosting documentation. It includes all the information you need to know when self-hosting Langfuse and answers to many questions that came up in the community.
Feel free to add to the docs and share any feedback that you might have!
Over the next few weeks, we will be adding more deployment templates for different cloud providers (we are tracking this for AWS, Google Cloud, and Azure). Let us know if any additional documentation would be helpful!
The v3 thread is by far the most extensive Langfuse discussion thread. Thanks to everyone who contributed to the thread and helped us shape this release.
A special thank you to those who tested v3 ahead of the stable release (v3.0.0-rc*) and provided detailed feedback on the documentation and upgrade process. Thank you for your help in making this process smoother for everyone else!
We are super excited to see what you will build with Langfuse v3, and how it unlocks many roadmap items that were previously constrained by the v2 architecture!
Greetings from the Langfuse HQ, big day here!
Core team celebrating v3 release