Webinar

Quickly build and deploy LLMs with Retrieval Augmented Generation, featuring Intel Accelerators and Azure

Tuesday, January 28, 2025, 10:00–11:00 AM Ohio (US) time
Webinar in English

RAG offers a new way to maximize the capabilities of large language models (LLMs), producing more accurate, context-aware, and informative responses. Join Akash Shankaran (Intel), Ron Abellera (Microsoft), and Juan Pablo Norena (Canonical) for a tutorial on how RAG can enhance your LLMs. The session will explore how to optimize LLMs with RAG using Charmed OpenSearch, which provides multiple services for the pipeline: data ingestion, model ingestion, vector database, retrieval and ranking, and an LLM connector. We will also show the architecture of our RAG deployment in the Microsoft Azure cloud. Notably, the vector search capabilities of RAG are enhanced by Intel® AVX acceleration, delivering faster processing and high-throughput performance for the RAG workflow.
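To make the pipeline stages named above concrete, here is a minimal sketch of the retrieval step in a RAG workflow. Everything in it is a simplified stand-in, not the session's actual implementation: `VectorStore` plays the role of the OpenSearch k-NN index, and `embed` is a toy bag-of-characters embedding rather than a real embedding model.

```python
import math

def embed(text):
    # Toy bag-of-characters embedding (stand-in for a real embedding model).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """Hypothetical in-memory stand-in for the OpenSearch vector database."""

    def __init__(self):
        self.docs = []  # (text, embedding) pairs

    def ingest(self, text):
        # Data ingestion: embed each document and index it.
        self.docs.append((text, embed(text)))

    def retrieve(self, query, k=2):
        # Retrieval and ranking: nearest neighbors by cosine similarity.
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(query, store):
    # The LLM connector would forward a prompt like this to the model:
    # retrieved passages first, then the user's question.
    context = "\n".join(store.retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In a production deployment, the toy pieces above are replaced by an embedding model for `embed` and an OpenSearch k-NN index for `VectorStore`, but the flow (ingest, retrieve top-k by vector similarity, assemble the augmented prompt) is the same.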


