Google has delivered updates for developers that improve adaptive UI design, AI‑assisted creativity, and faster iteration across Android.
New tools arriving across Jetpack Compose, Android Studio, and a playful Androidify app show the company aligning design, development, and deployment for phones, tablets, foldables, and beyond.
Compose Adaptive Layouts 1.2 enters beta for developers exploring new Android form factors
Google’s foundational UI toolkit for adaptive apps has taken a step forward with a beta release focused on bigger canvases and flexible layouts. As foldables like the Pixel 10 Pro Fold enter the mainstream and tablets continue their resurgence, Compose’s adaptive features are central to creating responsive interfaces that make better use of space.
“To help you build these dynamic experiences more efficiently, we are announcing that the Compose Adaptive Layouts Library 1.2 is officially entering beta,” explained Google.
The release introduces strategies such as reflow and levitate to help Android developers move fluidly from single to multi‑pane layouts and to create polished transitions between device postures. It also adds built-in support for ‘Large’ and ‘Extra‑Large’ window size classes, establishing sensible breakpoints for richer UI changes on expansive displays.
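The idea behind window size classes can be sketched as a simple width-to-bucket mapping. This is an illustrative sketch, not the library's actual API: the enum and function names are invented here, and the breakpoint values (600, 840, 1200, and 1600 dp) are taken from Material 3 guidance, with the last two corresponding to the new ‘Large’ and ‘Extra‑Large’ classes.

```kotlin
// Illustrative width-based window size classes. Names are hypothetical;
// breakpoints follow Material 3 guidance (600/840/1200/1600 dp).
enum class WidthClass { COMPACT, MEDIUM, EXPANDED, LARGE, EXTRA_LARGE }

fun widthClassFor(widthDp: Int): WidthClass = when {
    widthDp < 600 -> WidthClass.COMPACT       // typical phone portrait
    widthDp < 840 -> WidthClass.MEDIUM        // small tablet, unfolded foldable
    widthDp < 1200 -> WidthClass.EXPANDED     // large tablet landscape
    widthDp < 1600 -> WidthClass.LARGE        // new in 1.2
    else -> WidthClass.EXTRA_LARGE            // new in 1.2: desktop-class windows
}

fun main() {
    println(widthClassFor(411))   // prints COMPACT
    println(widthClassFor(1280))  // prints LARGE
}
```

An app would switch from a single pane to multi‑pane layouts as the class crosses these breakpoints, rather than reacting to raw pixel widths.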
Google notes that “we see that users who use an app on both their phone and a larger screen are almost three times more engaged.” The company urges teams to go beyond stretching single‑column views to adopt patterns like list and detail panes shown side-by-side, which reduce taps and speed up workflows.
With more than 500 million large‑screen Android devices in the market, and forward‑looking concepts such as Connected Displays in developer preview, the opportunity for multi‑instance and desktop‑class features is growing fast.
Androidify reimagines personalisation with Gemini and Compose
Alongside tooling, Google has launched Androidify, a new app and web experience that turns selfies and prompts into customised Android bot avatars.
“Androidify is our new app that lets you build your very own Android bot, using a selfie and AI,” says Google. It uses the Firebase AI Logic SDK to tap into Gemini and a fine‑tuned version of Imagen for image generation, plus ML‑powered camera smarts.
The flow begins with safety and quality checks. Gemini 2.5 Flash validates that an image contains a clear, focused person and meets standards before processing. It then captions the image using structured JSON output, which feeds into the final generation step where Imagen 3 – fine‑tuned to Androidify’s playful aesthetic – creates the bot. A ‘Help me write’ option powered by Gemini 2.5 Flash can suggest clothing and hairstyle descriptions for an “I’m feeling lucky” twist.
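The hand-off between the captioning and generation steps can be sketched in miniature: a structured caption (the kind of JSON-shaped output Gemini can be asked to return) is folded into the text prompt sent to the image model. The field names, data class, and prompt wording below are hypothetical illustrations, not Androidify's actual schema or prompts.

```kotlin
// Hypothetical structured caption, standing in for the JSON fields
// Gemini might return about the person in the photo.
data class BotCaption(
    val clothing: String,
    val hairstyle: String,
    val accessories: List<String> = emptyList(),
)

// Fold the structured caption into a single text prompt for the
// image-generation step. Wording is illustrative only.
fun buildBotPrompt(caption: BotCaption): String = buildString {
    append("Create a playful Android bot wearing ${caption.clothing}, ")
    append("with ${caption.hairstyle} hair")
    if (caption.accessories.isNotEmpty()) {
        append(", plus ${caption.accessories.joinToString(", ")}")
    }
    append(".")
}

fun main() {
    val caption = BotCaption("a denim jacket", "curly", listOf("round glasses"))
    println(buildBotPrompt(caption))
}
```

The benefit of the structured intermediate step is that each field can be validated, or replaced by a ‘Help me write’ suggestion, before anything reaches the image model.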
“The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors,” explains Google.
Androidify takes advantage of Material 3’s expressive design language with new shapes, motion schemes, and custom animations. CameraX works with ML Kit Pose Detection to detect when a person is in view, enable capture at the right moment, and add visual guides. Support for foldables extends to tabletop mode.
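The capture-gating idea can be sketched as a predicate over pose landmarks: only enable the shutter when enough landmarks are confidently in frame. ML Kit's pose landmarks do expose an in-frame likelihood, but the threshold values and types below are assumptions for illustration, not ML Kit defaults or its API.

```kotlin
// Hypothetical landmark type; ML Kit's real PoseLandmark exposes a
// similar in-frame likelihood score per detected body point.
data class Landmark(val inFrameLikelihood: Float)

// Gate capture on having enough confidently-visible landmarks.
// Both thresholds are assumed values for this sketch.
fun readyToCapture(
    landmarks: List<Landmark>,
    minLandmarks: Int = 10,
    minLikelihood: Float = 0.8f,
): Boolean =
    landmarks.count { it.inFrameLikelihood >= minLikelihood } >= minLandmarks
```

In the real app this kind of check would run per camera frame, driving the visual guides and the moment-of-capture trigger.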
“Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background ‘vibe’ to bring the Android bots to life,” says Google. Through Firebase AI Logic, the app passes a prompt describing the scene along with the bot image and instructions on how to compose them.
For sharing, a “Sticker mode” removes backgrounds for PNG exports that work in sticker‑supporting apps. “The app also includes a ‘Sticker mode’ option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot,” Google adds.
Other Compose enhancements also feature in the app. It uses WindowSizeClass and reusable composables for adaptive layouts across phones, tablets, and foldables. Navigation 3 enables shared element transitions and morphing shapes for smooth screen changes. Compose 1.8’s auto‑sizing text helps fit hero copy neatly within containers.
Android Studio Narwhal 3 ‘Feature Drop’ speeds up releases for developers on the cutting edge
On the IDE front, Google has released the third Feature Drop for Android Studio Narwhal. The company is now shipping updates monthly, making it easier for developers to adopt incremental improvements.
AI assistance in Studio continues to deepen. AGENTS.md files can now live alongside code to give Gemini project‑level context, style rules, and guidance. “AGENTS.md is a Markdown file that lets you provide project-specific instructions, coding style rules, and other guidance that Gemini automatically uses for context,” explains Google.
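As a hedged illustration, a minimal AGENTS.md for a Compose project might look like the following; the headings, rules, and module paths are invented examples, not a Google-prescribed format.

```markdown
# Project guidance for Gemini

## Coding style
- Kotlin only; prefer Jetpack Compose over Views for new UI.
- Use window size classes for adaptive layouts; avoid hard-coded dp breakpoints.

## Project context
- UI code lives in `:app:ui`; networking lives in `:app:data`.
- Composables are PascalCase nouns; previews end in `Preview`.
```

Because the file sits in the repository, the same guidance travels with the code and applies consistently across the team.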
Gemini also graduates image attachments and the @file context drawer from Labs to stable, allowing developers to add design mock‑ups, screenshots, and source files for more accurate responses.
Faster UI iteration is a theme of this Feature Drop. Android developers can enter Focus mode, drag the preview edges to test breakpoints, and save a specific size as a new @Preview with one click. It is a boon for multi‑device testing without leaving the IDE.
“Building responsive UIs just got easier: Compose Preview now supports dynamic resizing, giving you instant visual feedback on how your UI adapts to different screen sizes,” says Google.
Beyond AI and previews, Studio brings practical developer‑experience upgrades. Backup and Restore tools make it easier to test data migration as users move to new devices, and backups can be attached to run configurations. Play Policy Insights surfaces lint warnings tied to Google Play requirements, with links to guidance so teams can fix issues early and add checks to CI. The ProGuard editor warns about overly broad keep rules that limit R8 optimisation.
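The keep-rule warning targets patterns like the first rule below, which retains an entire package and blocks R8 from shrinking or optimising any of it; the narrower second form is the kind of alternative the editor nudges teams toward. Package and class names here are hypothetical.

```
# Overly broad: keeps every class and member under the package,
# preventing R8 from shrinking or optimising any of it.
-keep class com.example.app.** { *; }

# Narrower: keep only the reflectively-accessed model classes.
-keep class com.example.app.network.ApiModels { <init>(); }
```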
Large projects also benefit from UI and build control changes. The Android view can display build files directly under their modules, improving navigation in multi‑module codebases. Teams can now manage Gradle sync timing to avoid interruptions.
A coherent push toward adaptive, AI‑assisted Android
Taken together, these updates form a cohesive narrative. Compose Adaptive Layouts 1.2 beta equips developers to build interfaces that scale gracefully across phones, tablets, and foldables. Androidify demonstrates Gemini and Imagen working with Firebase AI Logic to deliver consumer‑friendly creativity powered by Compose and ML. Studio Narwhal’s monthly cadence and feature set make building, previewing, and shipping adaptive UIs faster, with AI woven into the workflow.
For Android developers, some of the practical takeaways are:
- Invest in multi‑pane patterns and window size classes to deliver on large screens.
- Use Compose’s latest capabilities, from auto‑sizing text to shared element transitions, to raise the bar on polish.
- Lean on Studio’s AGENTS.md and context attachments to get more from Gemini, and use resizable Compose Preview to harden layouts before hitting devices.
- Test data continuity with backup and restore, track Play policies with lint, and keep builds smooth with manual sync and better project views.
With foldables like the Pixel 10 Pro Fold joining Samsung’s Galaxy Z line and a growing installed base of large‑screen devices, adaptive design is no longer a niche consideration. It is a route to happier users, better engagement, and feature depth across the ecosystem.
As Google puts it, “Embracing an adaptive mindset is more than a best practice, it’s a strategy for growth.”
See also: Google to mandate verification for all Android app developers
