feat(ai): add vercel ai integration #5858
Conversation
Overall package size

Self size: 11.86 MB

Dependency sizes:

| name | version | self size | total size |
|------|---------|-----------|------------|
| @datadog/libdatadog | 0.7.0 | 35.02 MB | 35.02 MB |
| @datadog/native-appsec | 10.1.0 | 20.37 MB | 20.37 MB |
| @datadog/native-iast-taint-tracking | 4.0.0 | 11.72 MB | 11.73 MB |
| @datadog/pprof | 5.9.0 | 9.77 MB | 10.14 MB |
| @opentelemetry/core | 1.30.1 | 908.66 kB | 7.16 MB |
| protobufjs | 7.5.3 | 2.95 MB | 5.6 MB |
| @datadog/wasm-js-rewriter | 4.0.1 | 2.85 MB | 3.58 MB |
| @datadog/native-metrics | 3.1.1 | 1.02 MB | 1.43 MB |
| @opentelemetry/api | 1.8.0 | 1.21 MB | 1.21 MB |
| jsonpath-plus | 10.3.0 | 617.18 kB | 1.08 MB |
| import-in-the-middle | 1.14.2 | 122.36 kB | 850.93 kB |
| lru-cache | 10.4.3 | 804.3 kB | 804.3 kB |
| source-map | 0.7.4 | 226 kB | 226 kB |
| opentracing | 0.14.7 | 194.81 kB | 194.81 kB |
| pprof-format | 2.1.0 | 111.69 kB | 111.69 kB |
| @datadog/sketches-js | 2.1.1 | 109.9 kB | 109.9 kB |
| lodash.sortby | 4.7.0 | 75.76 kB | 75.76 kB |
| ignore | 7.0.5 | 63.38 kB | 63.38 kB |
| istanbul-lib-coverage | 3.2.2 | 34.37 kB | 34.37 kB |
| rfdc | 1.4.1 | 27.15 kB | 27.15 kB |
| dc-polyfill | 0.1.10 | 26.73 kB | 26.73 kB |
| @isaacs/ttlcache | 1.4.1 | 25.2 kB | 25.2 kB |
| tlhunter-sorted-set | 0.1.0 | 24.94 kB | 24.94 kB |
| shell-quote | 1.8.3 | 23.74 kB | 23.74 kB |
| limiter | 1.1.5 | 23.17 kB | 23.17 kB |
| retry | 0.13.1 | 18.85 kB | 18.85 kB |
| semifies | 1.0.0 | 15.84 kB | 15.84 kB |
| jest-docblock | 29.7.0 | 8.99 kB | 12.76 kB |
| crypto-randomuuid | 1.0.0 | 11.18 kB | 11.18 kB |
| ttl-set | 1.0.0 | 4.61 kB | 9.69 kB |
| mutexify | 1.4.0 | 5.71 kB | 8.74 kB |
| path-to-regexp | 0.1.12 | 6.6 kB | 6.6 kB |
| koalas | 1.0.2 | 6.47 kB | 6.47 kB |
| module-details-from-path | 1.0.4 | 3.96 kB | 3.96 kB |

🤖 This report was automatically generated by heaviest-objects-in-the-universe
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #5858      +/-   ##
==========================================
+ Coverage   81.90%   83.08%    +1.18%
==========================================
  Files         376      392       +16
  Lines       16739    17343      +604
==========================================
+ Hits        13710    14410      +700
+ Misses       3029     2933       -96
```

☔ View full report in Codecov by Sentry.
Benchmarks

Benchmark execution time: 2025-08-14 15:30:41

Comparing candidate commit 017d15e in PR branch. Found 0 performance improvements and 0 performance regressions! Performance is the same for 1270 metrics, 53 unstable metrics.
Datadog Report

Branch report: ✅ 0 Failed, 1257 Passed, 0 Skipped, 20m 11.47s Total Time
Hey folks! I'm excited for this PR! When do you expect it will be merged? Thanks!
Same here; we're waiting to see this merged so we can start using Datadog's LLM product, thank you! Do you expect this to work with AI SDK v5, which is currently in beta?
Hi @ctamulonis-glossy @gabrielsch! Yes, we are looking to get this merged soon (within the next couple of weeks), and it should support AI SDK v5 😄
* add vercel ai integration with otel processing
* add some typedocs and comments
* fix tagger test
* rename to 'ai'
* try doing with a custom tracer
* change up implementation slightly
* codeowners
* undo id changes
* get rid of otel span start/end publishes
* revert llmobs tagger test change
* add better noop default tracer and esm support
* delete util file
* add initial test skeleton
* fix duplicate wrapping
* simplify patching
* apm tests
* write some tests
* add rest of llmobs tests
* add ci job
* fix node version and import issue with check and externally defined version
* fix metadata tagging
* handle tool rolls
* remove import
* add default return and docstring for formatMessage
* fix tool message tests
* add model name and provider tags to apm tracing
* some self review
* address some review comments
* do not stub for tests, instead use dummy test agent
* move cassettes to local directory to fix tests
* configurable flush interval for tests
* use separate image for ai tests that have local cassettes attached
* use different port
* move test flush interval back local
* change in esm test
* Revert "move test flush interval back local" (this reverts commit 39ccf68)
* Revert "use different port" (this reverts commit ac4071c)
* Revert "use separate image for ai tests that have local cassettes attached" (this reverts commit 72eca83)
* Revert "move cassettes to local directory to fix tests" (this reverts commit 20c3c63)
* Revert "change in esm test" (this reverts commit 32d75a6)
* remove env var from llmobs workflow
* fix type hint for test util
* add test cassettes
* re-trigger ci
* more review fixes
* try removing configuration
* revert supported config undoing
* add ai to versions package.json
* add @ai-sdk/openai to versions package.json
* fix tests
* remove unrelated change
* clean up tests & versions more to not use zod directly
* require withVersions directly
* forgotten withversions
* change ci for ai job to use node latest
hey folks - for all watching this PR, auto-instrumentation support for Vercel's AI SDK
What does this PR do?
Adds APM and LLM Observability support for `ai@4.0.0` and greater.

DISCLAIMER: Most LOC are from "cassettes" added locally, used to mock and play back locally-recorded responses from provider APIs. These are stripped of any sensitive information/headers in the `ddapm-test-agent` image.

The Vercel AI SDK provides OTel tracing of its operations under the hood. This gives us a nice "in": we can patch the tracer it uses, intercept the `startActiveSpan` function and various operations on the underlying span, and translate them into APM and LLM Observability spans. The integration works by doing exactly that: it patches the tracer passed in, and if none is passed in, it uses a default one and enables experimental telemetry so that the underlying Vercel AI SDK automatically uses this tracer.
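The tracer-patching approach described above can be sketched as follows. Note this is a simplified stand-in: the `fakeTracer` object and `wrapStartActiveSpan` helper are illustrative assumptions, not dd-trace-js internals, and the real OTel `startActiveSpan` has several overloads that are omitted here.

```javascript
// Simplified stand-in for an OTel-style tracer. The real integration patches
// the tracer the Vercel AI SDK uses for its experimental telemetry; this
// object and the wrapper below are illustrative, not dd-trace-js internals.
const fakeTracer = {
  startActiveSpan (name, options, fn) {
    const span = { name, attributes: options.attributes || {} }
    return fn(span)
  }
}

const intercepted = []

// Wrap startActiveSpan so every span the SDK starts can be observed and
// translated into another tracing system's span, without changing the
// return value seen by the caller.
function wrapStartActiveSpan (tracer, onSpanStart) {
  const original = tracer.startActiveSpan
  tracer.startActiveSpan = function (name, options, fn) {
    return original.call(this, name, options, span => {
      onSpanStart(span) // e.g. open a matching APM/LLMObs span here
      return fn(span)
    })
  }
}

wrapStartActiveSpan(fakeTracer, span => intercepted.push(span.name))

const result = fakeTracer.startActiveSpan('ai.generateText', {}, () => 'done')

console.log(result)      // 'done'
console.log(intercepted) // [ 'ai.generateText' ]
```

The key property is that the wrapper is transparent: the SDK still gets its span and callback result back unchanged, while the integration observes each span as it starts.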
Some implementation details:

- Spans are mapped to LLM Observability span kinds (`workflow`, `llm`, `embedding`, and `tool` are applicable).

Additional changes unrelated to the user-facing feature include:

- a `useLlmobs` hook that provides a `getEvents` function to get APM spans and pre-encoded LLMObs span events. This is just a nice-to-have that can be used in the other integrations as well.

Motivation
Closes #5410
MLOB-2980
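As an illustration of the span-kind mapping mentioned in the implementation details, a simplified classifier might look like this. The operation names follow the AI SDK's telemetry conventions, but the exact mapping logic used by dd-trace-js is an assumption here:

```javascript
// Hypothetical mapping from Vercel AI SDK telemetry operation names to LLM
// Observability span kinds. The prefix/suffix checks and kind choices are
// illustrative assumptions, not dd-trace-js's actual logic.
function spanKindForOperation (name) {
  if (name === 'ai.toolCall') return 'tool'
  if (name.startsWith('ai.embed')) return 'embedding'
  // inner ".doGenerate"/".doStream" spans wrap the actual model call
  if (name.endsWith('.doGenerate') || name.endsWith('.doStream')) return 'llm'
  // top-level operations like "ai.generateText" act as workflows
  return 'workflow'
}

console.log(spanKindForOperation('ai.generateText'))            // 'workflow'
console.log(spanKindForOperation('ai.generateText.doGenerate')) // 'llm'
console.log(spanKindForOperation('ai.embed.doEmbed'))           // 'embedding'
console.log(spanKindForOperation('ai.toolCall'))                // 'tool'
```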