How to use Hugging Face Models with Semantic Kernel


In this blog post, we learn how to integrate Semantic Kernel with Hugging Face models, combining access to more than 190,000 models from Hugging Face with the latest advancements in Semantic Kernel's orchestration, skills, planner, and contextual memory support.

What is Hugging Face?

Hugging Face is a leading hub for open-source machine learning models. These models offer great efficiencies: they come pre-trained, are easy to swap out, and are cost-effective, with many available for free.

How to use Semantic Kernel with Hugging Face?

To use Semantic Kernel with Hugging Face, we start by creating a kernel instance and configuring the Hugging Face services we want to use, such as gpt2 for text completion and sentence-transformers/all-MiniLM-L6-v2 for text embeddings. Then, we register the text memory skill that we use for this example.
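In the early Semantic Kernel Python SDK, this setup looked roughly as follows (the SDK's API has evolved since, so treat the class and method names here as a sketch of that early interface rather than the current one):

```python
import semantic_kernel as sk
import semantic_kernel.connectors.ai.hugging_face as sk_hf

kernel = sk.Kernel()

# Register a Hugging Face model for text completion (gpt2 here)
kernel.config.add_text_completion_service(
    "gpt2", sk_hf.HuggingFaceTextCompletion("gpt2", task="text-generation")
)

# Register a sentence-transformers model for text embeddings
kernel.config.add_text_embedding_generation_service(
    "sentence-transformers/all-MiniLM-L6-v2",
    sk_hf.HuggingFaceTextEmbedding("sentence-transformers/all-MiniLM-L6-v2"),
)

# Back the kernel with an in-memory store and import the text memory skill
kernel.register_memory_store(memory_store=sk.memory.VolatileMemoryStore())
kernel.import_skill(sk.core_skills.TextMemorySkill())
```

Running this requires the `semantic-kernel` package with its Hugging Face extras (which pull in `transformers` and `sentence-transformers`) installed locally.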

Next, we define the fact memories we want the model to reference as it answers. In this example, the facts are about animals, but you are free to edit them and get creative as you test this out yourself. We also create a prompt template that spells out how the model should respond to our query.
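The mechanics of the template step can be illustrated without the SDK: each placeholder in the prompt is replaced by a fact recalled from memory. The toy `render` helper and `{{$name}}` placeholder syntax below are stand-ins inspired by Semantic Kernel's template variables, not part of its API:

```python
import re

# Toy fact store standing in for Semantic Kernel's semantic memory
facts = {
    "fact1": "Sharks are fish.",
    "fact2": "Whales are mammals.",
    "fact3": "Penguins are birds.",
}

# A prompt template in the spirit of Semantic Kernel's {{$variable}} syntax
template = (
    "Consider only the facts below when answering.\n"
    "Facts: {{$fact1}} {{$fact2}} {{$fact3}}\n"
    "Question: {{$query}}\n"
    "Answer: "
)

def render(template: str, variables: dict) -> str:
    """Replace each {{$name}} placeholder with its value from `variables`."""
    return re.sub(
        r"\{\{\$(\w+)\}\}",
        lambda m: variables.get(m.group(1), ""),
        template,
    )

prompt = render(template, {**facts, "query": "What are whales?"})
print(prompt)
```

The completion model then receives this fully rendered prompt, so its answer is grounded in the facts we stored rather than only in its training data.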

Finally, we set the query and its relevance parameters, run the kernel, and return the output to get our response. With this integration, users can enjoy the accessibility of Hugging Face models together with the latest advancements in Semantic Kernel's orchestration, skills, planner, and contextual memory support to create powerful and efficient NLP applications. Happy testing!
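The relevance parameter boils down to comparing the query's embedding with each stored fact's embedding and keeping only matches above a threshold. Here is a minimal sketch with made-up 3-dimensional vectors (real all-MiniLM-L6-v2 embeddings have 384 dimensions, and Semantic Kernel performs this search inside its memory store):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recall(query_vec, memory, relevance=0.8, limit=1):
    """Return up to `limit` facts whose similarity to the query meets `relevance`."""
    scored = [
        (cosine_similarity(query_vec, vec), text) for text, vec in memory.items()
    ]
    scored.sort(reverse=True)
    return [text for score, text in scored[:limit] if score >= relevance]

# Toy embeddings standing in for sentence-transformer output
memory = {
    "Whales are mammals.": [0.9, 0.1, 0.0],
    "Penguins are birds.": [0.0, 0.2, 0.9],
}
query_vec = [0.95, 0.05, 0.0]  # pretend embedding of "What are whales?"

print(recall(query_vec, memory, relevance=0.8, limit=1))
```

Raising the relevance threshold makes recall stricter (fewer, better matches), while raising the limit lets more facts flow into the prompt.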