Google is releasing a major update to the Gemini app centered on the Gemini 3 model, redesigned app surfaces and an experimental agent that can carry out multi-step tasks.
What’s included
- Gemini 3: a new, higher-performing model that improves reasoning, response clarity and multimodal understanding.
- Generative interfaces: model-generated interactive layouts (visual layout) and real-time custom UIs (dynamic view) that adapt to a user’s prompt.
- Gemini Agent (experimental): an agentic capability that coordinates across tools to complete multi-step requests, initially rolling out to Google AI Ultra members on the web in the U.S.
Gemini 3: an AI upgrade
Gemini 3 delivers stronger reasoning, cleaner formatting and more concise answers. It also strengthens "vibe coding," so apps built with Google’s Canvas feature can include richer functionality. The model advances multimodal understanding, letting the app better interpret images and text together for tasks like homework help or lecture transcription.
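Gemini models are also exposed to developers through the Gemini API. Below is a minimal sketch of a combined image-and-text request, assuming the public google-genai Python SDK; the "gemini-3-pro-preview" model id is a hypothetical placeholder, not confirmed by this announcement:

```python
# Minimal sketch of a multimodal request, assuming the public google-genai
# Python SDK; "gemini-3-pro-preview" is a hypothetical placeholder model id.
from google import genai
from PIL import Image

client = genai.Client()  # reads the API key from the environment

# One request combining an image and a text prompt, e.g. homework help
# from a photographed worksheet.
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # hypothetical model id
    contents=[
        Image.open("worksheet.jpg"),
        "Explain how to solve problem 3 step by step.",
    ],
)
print(response.text)
```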
Gemini 3 Pro is being rolled out globally; users can select it by choosing the “Thinking” option in the model selector. Google AI Plus, Pro and Ultra subscribers will receive higher usage limits. Google is also extending a free year of Google AI Pro to U.S. college students to provide access to Gemini 3.
A redesigned app and generative interfaces
The Gemini app interface has been refreshed with a cleaner layout, simplified chat entry and a new "My Stuff" folder that makes previously created images, videos and reports easier to find. The shopping experience has been enhanced by integrating product listings, comparison tables and prices from Google’s Shopping Graph, which spans billions of products.
Google is introducing a new class of model-driven UIs called generative interfaces. Two initial experiments are launching:
- Visual layout: produces immersive, magazine-style presentations that include photos and modular elements; the layout invites user interaction and can be tailored (for example, a multi-day trip itinerary with tappable details).
- Dynamic view: uses Gemini 3’s agentic coding abilities to generate a custom interactive interface on the fly, suited to the user’s request (for example, an interactive guided tour of an art gallery with contextual details); a sketch of this pattern appears below.
These experiments are rolling out gradually, and users may initially see only one of the two while Google compares how they perform.
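To make the dynamic view idea concrete, here is a hypothetical sketch of the generate-an-interface-on-the-fly pattern, again assuming the google-genai SDK and a placeholder model id. It illustrates the concept only and is not Google’s implementation:

```python
# Hypothetical sketch of the "generate an interface on the fly" pattern;
# the prompt and model id are assumptions, not Google's code.
import webbrowser
from pathlib import Path

from google import genai

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # hypothetical model id
    contents=(
        "Generate a single self-contained HTML file (inline CSS and JS) "
        "presenting an interactive guided tour of Vincent van Gogh's major "
        "works, one section per painting, with contextual notes."
    ),
)

# A real implementation would strip any markdown fences the model wraps
# around the HTML before rendering it.
out = Path("tour.html")
out.write_text(response.text, encoding="utf-8")
webbrowser.open(out.resolve().as_uri())
```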
Gemini Agent: handling multi-step tasks
Gemini Agent is an experimental feature that can execute multi-step workflows inside the Gemini app by connecting to Google Workspace apps and other tools. It can manage Calendar events, set reminders, prioritize inbox items and draft replies for user review. Users can also provide detailed instructions (for example, research and arrange a rental car within budget using information from email), and Gemini Agent will gather relevant details and prepare options.
Building on lessons from Project Mariner and powered by Gemini 3’s reasoning, Gemini Agent uses tools such as Deep Research, Canvas, connected Google Workspace apps (like Gmail and Calendar) and live web browsing. The agent is designed to request confirmation before critical actions (like purchases or sending messages) and lets users intervene at any time. It is initially available on the web to Google AI Ultra subscribers in the U.S.
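That confirmation step corresponds to a simple gating pattern around tool calls. The sketch below is a generic illustration of the pattern; the tool names and loop structure are hypothetical and do not reflect Google’s implementation:

```python
# Generic sketch of a confirmation gate around agent tool calls: "critical"
# tools (purchases, outgoing messages) run only after explicit user approval.
# Tool names and structure here are hypothetical, not Google's implementation.
CRITICAL_TOOLS = {"send_email", "complete_purchase"}

def execute_tool_call(name, args, tools):
    """Run one agent-chosen tool call, pausing for approval when critical."""
    if name in CRITICAL_TOOLS:
        reply = input(f"Agent wants to run {name}({args}). Approve? [y/N] ")
        if reply.strip().lower() != "y":
            return {"status": "cancelled", "tool": name}  # user intervened
    return {"status": "done", "result": tools[name](**args)}

# Stub tools for illustration; a planner model would normally pick the calls.
tools = {
    "search_rentals": lambda budget: f"3 cars found under ${budget}",
    "complete_purchase": lambda item: f"purchased {item}",
}
print(execute_tool_call("search_rentals", {"budget": 300}, tools))        # runs freely
print(execute_tool_call("complete_purchase", {"item": "rental"}, tools))  # asks first
```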
Together, these changes advance Google’s goal of a more personal, proactive and capable assistant experience.