AI Engineer · April 10, 2026

Running LLMs locally: Practical LLM Performance on DGX Spark — Mozhgan Kabiri Chimeh, NVIDIA

Why it matters

This AI Engineer session, presented by Mozhgan Kabiri Chimeh of NVIDIA, covers practical LLM inference performance when running models locally on DGX Spark. It adds useful context for how teams are building and operating AI systems in production.

My takeaway: useful for AI engineering because it grounds model adoption in concrete developer workflows, tooling, and product tradeoffs.
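Since the session is about practical local LLM performance, a minimal sketch of the most common headline metric, decode throughput in tokens per second, may be useful. This is an illustrative helper, not code from the talk; the function name and values are my own.

```python
import time

def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Decode throughput: generated tokens divided by wall-clock seconds."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s

# Hypothetical run: time a generation call and compute throughput.
start = time.perf_counter()
# ... call your local inference stack here ...
elapsed = time.perf_counter() - start

# Example with fixed numbers: 256 tokens in 8 seconds.
print(tokens_per_second(256, 8.0))  # → 32.0
```

In practice you would also separate prefill (prompt processing) from decode (token generation) time, since the two scale differently with prompt length and batch size.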