Instant API code: “Gorilla LLM’s” Thought to Terminal Action— Demo in Streamlit App

by AI TutorMaster | Aug 2023 | Level Up Coding

Unveiling the magic of real-time code generation and execution using Gorilla LLM and Streamlit.

Introduction: With advances in machine learning, and in natural language processing (NLP) in particular, many platforms and APIs have emerged that deliver high-quality language models. This article walks through a code example that applies Gorilla, a large language model from UC Berkeley aimed at generating API calls, using Streamlit, a fast and efficient way to build data applications. Gorilla is queried through an OpenAI-compatible client: the API key is set to the placeholder "EMPTY", since the hosted Gorilla server does not require one, and the custom API base is pointed at a Berkeley server.

Streamlit UI: Streamlit provides the front-end interface. The basic layout, text areas, buttons, and output display are all created with Streamlit functions; its fluidity and simplicity make it a popular choice for building applications like this.

Querying the Gorilla server: A function `get_gorilla_response` queries the Gorilla server. It takes a user prompt and a model variant, sends the request to the server, and fetches the response. Any errors during this communication are caught and reported.

Handling code in the response: The response from the server may contain code snippets. The `extract_code_from_output` function pulls these snippets out. This is crucial because the primary idea is not just to generate and display code but also to execute it.
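The querying step described above can be sketched as follows. This is a minimal, standard-library-only sketch assuming the server speaks the OpenAI-compatible chat-completions format; the server URL, model name, and the `build_payload` helper are placeholders and assumptions, not the article's exact code (which likely uses the `openai` client).

```python
import json
import urllib.request

# Placeholder values: substitute the real Gorilla server URL and model variant.
API_BASE = "http://<gorilla-server>:8000/v1"  # hypothetical endpoint
API_KEY = "EMPTY"  # the hosted Gorilla server does not check the key

def build_payload(prompt: str, model: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def get_gorilla_response(prompt: str, model: str = "gorilla-7b-hf-v1") -> str:
    """Query the Gorilla server and return the assistant's reply.

    Errors during communication are caught and reported as a string,
    mirroring the behaviour described in the article.
    """
    request = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    try:
        with urllib.request.urlopen(request, timeout=30) as response:
            body = json.loads(response.read().decode("utf-8"))
        return body["choices"][0]["message"]["content"]
    except Exception as exc:  # report the error instead of crashing the UI
        return f"Error communicating with Gorilla server: {exc}"
```

In the Streamlit app, this function would be wired to a button: the user's prompt comes from a text area, and the returned string is rendered in the output display.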
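The extraction step can be sketched with a small regular-expression helper. This assumes the model wraps code in Markdown-style triple-backtick fences; the article's `extract_code_from_output` may differ in detail.

```python
import re

def extract_code_from_output(output: str) -> list:
    """Pull fenced code snippets out of a model response.

    Matches ``` fences, strips the optional language tag on the opening
    fence, and returns the inner code blocks as a list of strings.
    """
    pattern = re.compile(r"```(?:[\w+-]*)\n(.*?)```", re.DOTALL)
    return [block.strip() for block in pattern.findall(output)]
```

If the response contains no fenced blocks, the function returns an empty list, which the app can use to decide whether there is anything to execute.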
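Since the stated goal is to execute the generated code, not merely display it, the execution step might look like the sketch below. It assumes the extracted snippet is Python and runs it in a child interpreter via `subprocess`; this helper and its name are illustrative, not the article's code, and running model-generated code should be properly sandboxed in any real deployment.

```python
import subprocess
import sys

def run_code_snippet(code: str, timeout: int = 30) -> str:
    """Execute a Python snippet in a child interpreter and capture its output.

    A nonzero exit code is reported with the captured stderr so the app
    can surface failures to the user instead of crashing.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],  # run the snippet in a fresh process
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        return f"Execution failed:\n{result.stderr}"
    return result.stdout
```

The returned string (stdout or the error report) can then be shown in the Streamlit display area next to the generated code.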