Langfuse

Software Development

Open Source LLM Engineering Platform

About us

Langfuse is the 𝗺𝗼𝘀𝘁 𝗽𝗼𝗽𝘂𝗹𝗮𝗿 𝗼𝗽𝗲𝗻 𝘀𝗼𝘂𝗿𝗰𝗲 𝗟𝗟𝗠𝗢𝗽𝘀 𝗽𝗹𝗮𝘁𝗳𝗼𝗿𝗺. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be 𝘀𝗲𝗹𝗳-𝗵𝗼𝘀𝘁𝗲𝗱 in minutes and is battle-tested: it is used in production by thousands of users, from YC startups to large companies like Khan Academy and Twilio, and builds on a proven track record of reliability and performance.

Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, LangChain, LlamaIndex, Vercel AI SDK). Beyond tracing, developers use 𝗟𝗮𝗻𝗴𝗳𝘂𝘀𝗲 𝗣𝗿𝗼𝗺𝗽𝘁 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁, 𝗶𝘁𝘀 𝗼𝗽𝗲𝗻 𝗔𝗣𝗜𝘀, 𝗮𝗻𝗱 𝘁𝗲𝘀𝘁𝗶𝗻𝗴 𝗮𝗻𝗱 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗽𝗶𝗽𝗲𝗹𝗶𝗻𝗲𝘀 to improve the quality of their applications.

Product managers can 𝗮𝗻𝗮𝗹𝘆𝘇𝗲, 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗲, 𝗮𝗻𝗱 𝗱𝗲𝗯𝘂𝗴 𝗔𝗜 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝘀 by accessing detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring 𝗵𝘂𝗺𝗮𝗻𝘀 𝗶𝗻 𝘁𝗵𝗲 𝗹𝗼𝗼𝗽 by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to 𝗺𝗼𝗻𝗶𝘁𝗼𝗿 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗿𝗶𝘀𝗸𝘀 through security frameworks and evaluation pipelines.

Langfuse enables 𝗻𝗼𝗻-𝘁𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝘁𝗲𝗮𝗺 𝗺𝗲𝗺𝗯𝗲𝗿𝘀 to iterate on prompts and model configurations directly within the Langfuse UI, or to use the Langfuse Playground for fast prompt testing.

Langfuse is 𝗼𝗽𝗲𝗻 𝘀𝗼𝘂𝗿𝗰𝗲 and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!

Website
https://langfuse.com
Industry
Software Development
Company size
2-10 employees
Headquarters
San Francisco
Type
Privately Held
Founded
2022
Specialties
Langfuse, Large Language Models, Observability, Prompt Management, Evaluations, Testing, Open Source, LLM, AI, Analytics, and Artificial Intelligence


Updates

  • 🚀 Big news: We're open sourcing ALL product features under the MIT license!

    Today marks a major milestone in our journey to build the leading open source LLM Engineering Platform. We're releasing the following features under the MIT License:
    ✅ LLM-as-a-Judge Evaluations
    ✅ Annotation Queues
    ✅ Prompt Experiments
    ✅ Playground
    ✅ And more...

    𝗪𝗵𝘆 𝗮𝗿𝗲 𝘄𝗲 𝗱𝗼𝗶𝗻𝗴 𝘁𝗵𝗶𝘀? The LLM landscape changes every week. Developers need a platform they can trust, extend, and iterate on without barriers. By removing commercial gates from core product features, we're fostering deeper community collaboration and accelerating development speed.

    𝗧𝗵𝗲 𝗻𝘂𝗺𝗯𝗲𝗿𝘀 𝘀𝗽𝗲𝗮𝗸 𝗳𝗼𝗿 𝘁𝗵𝗲𝗺𝘀𝗲𝗹𝘃𝗲𝘀: more than 7,000,000 monthly SDK installs, more than 5,500,000 Docker pulls, and 8,000+ monthly active self-hosted Langfuse instances that we know of.

    This is our commitment to the developer community: your data stays yours, your workflows stay flexible, and your platform stays open.
    👉 Star us on GitHub
    👉 Deploy with our new Terraform modules
    👉 Join the conversation on our roadmap

    We're incredibly grateful to work with such an amazing community. The future of LLM engineering is open. Let's build it together.

  • Langfuse reposted this

Anton Tcholakov

    Founding Engineer | Building in AI x Human Connection

🦎 Introducing the Reptyl Stack: A production-grade foundation for resilient AI applications

    After months of building complex AI workflows at Pond Labs, I found that libraries like LangGraph make it easy to prototype AI features, but fall short in production. The Reptyl Stack tackles the problems we encountered head-on and includes:
    🚀 Restate - durable execution that makes AI workflows automatically resilient
    📊 Langfuse - AI-specific observability (token tracking, cost monitoring, prompt versioning)

    Why this matters:
    - AI agents can execute complex, reversible operations (like booking travel) with automatic rollback
    - Human-in-the-loop workflows suspend gracefully without consuming resources
    - Multi-agent systems coordinate reliably with built-in fault tolerance

    🔥 Hot reload everywhere for rapid iteration and a great developer experience. The stack is intentionally flexible: start with this foundation, then add vector databases, specialised AI tools, or additional infrastructure as needs evolve. Built from real experience shipping MeetQu, our AI-powered meeting prep tool. The template is open source and ready to use. Links to the introductory blog post and the GitHub repo in the comments.

    #AI #SoftwareEngineering #OpenSource

  • New Integration: Trace LiveKit voice AI Agents with Langfuse LiveKit is an open-source Python and Node.js framework for building production-grade multimodal and voice AI agents. This integration captures traces for key agent activities in Langfuse, like session management, agent turns, LLM node executions, and more, leveraging OpenTelemetry data. Link in comments to get started.
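Because the integration rides on standard OpenTelemetry export, the wiring is mostly configuration. As a hedged sketch (the exact ingestion path and auth scheme are assumptions; verify them against the linked setup guide before use), pointing the standard OTLP environment variables at Langfuse might look like:

```shell
# Assumption: Langfuse exposes an OTLP/HTTP ingestion endpoint at this path,
# authenticated with Basic auth built from your public/secret key pair.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://cloud.langfuse.com/api/public/otel"
# tr -d '\n' guards against base64 implementations that wrap long lines
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic $(printf '%s:%s' "$LANGFUSE_PUBLIC_KEY" "$LANGFUSE_SECRET_KEY" | base64 | tr -d '\n')"
```

Any OpenTelemetry-instrumented process started with these variables set, including a LiveKit agent, would ship its spans to that endpoint without code changes.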

  • 💬 New Slack Integration: Stay informed about your prompt changes directly in Slack! Langfuse now offers a native integration that sends prompt change notifications right to your Slack channels. Whether you're monitoring production or ensuring team coordination, this feature keeps everyone updated. Check out the details on our changelog (link in comments)

  • ⏯️ New in Langfuse: The LLM Playground now features side-by-side prompt comparison, allowing for parallel or selective execution of prompts. Configure independently, test variations simultaneously, and directly save successful prompts to your project. 🔗 Link in Comments to learn more

  • 📊 More Langfuse Dashboard Improvements: CSV Download for Charts

    You can now download the metrics behind any chart in your custom dashboards as a CSV file, making further analysis and reporting easier. Hover over any chart and click the download button to receive your CSV file. This simplifies creating reports and integrating dashboard data into existing workflows.

    🔗 Link in comments for more details.
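Once downloaded, the export is ordinary CSV, so any standard tooling can consume it. A minimal sketch with Python's stdlib `csv` module, using a made-up export (the column names here are hypothetical; check the header row of your own download):

```python
import csv
import io

# Hypothetical chart export; real column names depend on the widget
# you downloaded from.
exported = """model,total_cost
gpt-4o,12.50
gpt-4o-mini,1.75
"""

rows = list(csv.DictReader(io.StringIO(exported)))
total = sum(float(r["total_cost"]) for r in rows)
print(f"{len(rows)} rows, total cost ${total:.2f}")  # 2 rows, total cost $14.25
```

In practice you would pass the downloaded file to `open(...)` instead of `io.StringIO`, or load it with pandas for larger analyses.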

  • 📊 New in Langfuse: Histogram Charts in Custom Dashboards

    We added histogram charts as a new visualization option in Langfuse custom dashboards. They help you better understand the distribution of your data, highlighting patterns, outliers, and trends in your LLM metrics. Perfect for latency analysis, cost patterns, score distributions, and more!

    Create your own histogram in a few easy steps:
    1. Dashboards → Widgets → New Widget
    2. Select data source
    3. Choose Histogram as your chart type
    4. Select metric to visualize
    5. Apply filters, save, and add to your dashboard

    🔗 Explore more in our changelog (link in comments)

  • 🧩 New in Langfuse: Prompt Version Webhooks

Max Deichmann

    Building langfuse.com (YC W23)

New in Langfuse: Prompt Version Webhooks 🪝

    You can now automatically receive HTTP notifications when prompt versions are created, updated, or deleted in your Langfuse project.

    Common use cases:
    – Trigger CI/CD pipelines on prompt changes
    – Sync with GitHub or other systems
    – Notify teammates of updates

    Includes filtering, signature verification, and retry handling. 🔗 Link in comments with more information.

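Signature verification for webhooks of this kind is typically an HMAC computed over the raw request body with a shared secret. A minimal sketch, assuming a hex-encoded HMAC-SHA256 signature (the actual header name, secret format, and signature encoding Langfuse uses may differ; consult the linked documentation):

```python
import hashlib
import hmac

def verify_webhook(secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare in constant time."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Usage: in your HTTP handler, verify the *raw* request bytes, not
# re-serialized JSON (re-serialization can change whitespace/key order).
body = b'{"action":"updated","prompt":{"name":"greeting","version":3}}'
sig = hmac.new(b"whsec_demo", body, hashlib.sha256).hexdigest()
print(verify_webhook("whsec_demo", body, sig))  # True
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.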
