Announcing Gemini Support!
We're thrilled to release BAML support for Gemini, Google AI's catalog of large language models!
The latest Gemini 1.5 Flash and Pro models offer an impressive context window of up to 1 million tokens, provide advanced safety and generation configurations, and support a broader range of multimodal inputs, such as video and audio, than the current public models from Anthropic and OpenAI.
Integrating with Gemini gives BAML users access to these latest advancements in large language models, and, most importantly, with structured outputs and type safety.
How to add Gemini to your BAML project:
- Generate a Google Generative AI API key.
- Add your API key to BAML's VSCode playground or save it as the `GOOGLE_API_KEY` environment variable.
- Instantiate a BAML client for Gemini:

```baml
client<llm> MyGeminiClient {
  provider google-ai
  options {
    model "gemini-1.5-pro-001"
  }
}
```
- Craft your function using the Gemini client:

```baml
function GetOrderInfo(email: Email) -> OrderInfo {
  client MyGeminiClient
  prompt #"
    Given the email below:

    """
    from: {{email.from_address}}
    Email Subject: {{email.subject}}
    Email Body: {{email.body}}
    """

    Convert the email data to JSON format:
    {{ ctx.output_format }}

    Prior to JSON output, describe your reasoning in a step-by-step manner.
    Here's an example:
    'If we think step by step we can see that ... hence the output JSON is:
    {
      ... the json schema ...
    }'
  "#
}
```
- Finally, effortlessly refine your prompt and data types in our playground and instantly incorporate them into your Python, TypeScript, and Ruby scripts!
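To illustrate the last step, here is a minimal Python sketch of what calling the function above looks like. The `Email` dataclass and its sample values are hypothetical stand-ins for the type BAML generates from your schema; the actual call goes through the generated `baml_client` package, shown in the comments:

```python
from dataclasses import dataclass

# Hypothetical local stand-in for the Email input type defined in your BAML schema.
@dataclass
class Email:
    from_address: str
    subject: str
    body: str

# Sample data for illustration only.
email = Email(
    from_address="jane@example.com",
    subject="Order shipped",
    body="Your order of 2 items shipped today.",
)

# With the client generated from your BAML project (and GOOGLE_API_KEY set),
# the function is invoked like a normal typed function:
#
#   from baml_client import b
#   order_info = b.GetOrderInfo(email)  # returns a typed OrderInfo object
```

The payoff of this pattern is that prompt, model, and output schema live in one `.baml` file, while your application code only ever sees typed inputs and outputs.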