Watch out, (mobile) app stores!
Dynamic web apps are back, with the help of AI.
In fact, they're in service of AI, courtesy of Google, OpenAI, and (Anthropic's) MCP.
"The future is all about AI-generated user interfaces (says Google)" - YouTube video by Maximilian Schwarzmüller
Google has a new project! Okay ... that happens frequently. But this one is in line with other industry efforts to dynamically generate purpose-built UIs with AI. Kind of like new "mini app stores" that might end up being not so mini.
A2UI is an open-source project, complete with a format optimized for representing updateable, agent-generated UIs and an initial set of renderers. It allows agents to generate or populate rich user interfaces so they can be displayed in different host applications, rendered by a range of UI frameworks such as Lit, Angular, or Flutter (with more to come). Renderers support a set of common components and/or a client-advertised set of custom components, which are composed into layouts. The client owns the rendering and can integrate it seamlessly into their branded UX. Orchestrator agents and remote A2A subagents can all generate UI layouts, which are securely passed as messages, not as executable code.
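To make the "messages, not executable code" idea concrete, here is a minimal sketch of what an agent-to-client UI message and a client-owned renderer might look like. The field names (`component`, `properties`, `children`) and the `render` helper are illustrative assumptions, not the actual A2UI schema:

```typescript
// Hypothetical, simplified shape of an agent-generated UI message.
// These field names are illustrative only -- not the real A2UI format.
type UIComponent = {
  component: string;                      // e.g. "Card", "Text", "Button"
  properties?: Record<string, unknown>;   // component-specific props
  children?: UIComponent[];               // composed into layouts
};

// The agent sends declarative data, never executable code:
const message: UIComponent = {
  component: "Card",
  children: [
    { component: "Text", properties: { text: "Order #1234 shipped" } },
    { component: "Button", properties: { label: "Track package", action: "track" } },
  ],
};

// A client-owned renderer maps component names onto its own UI framework
// (Lit, Angular, Flutter, ...), so branding and rendering stay local.
// Here we just serialize to an indented string to show the traversal.
function render(node: UIComponent, indent = 0): string {
  const pad = "  ".repeat(indent);
  const props = node.properties ? ` ${JSON.stringify(node.properties)}` : "";
  const kids = (node.children ?? []).map((c) => render(c, indent + 1)).join("");
  return `${pad}<${node.component}${props}>\n${kids}`;
}

console.log(render(message));
```

Because the payload is pure data, a malicious or buggy agent can at worst produce an odd layout from the client's known component set, not run arbitrary code in the host.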
Apps SDK is OpenAI's framework to build apps for ChatGPT.
With Apps SDK, MCP is the backbone that keeps server, model, and UI in sync. By standardising the wire format, authentication, and metadata, it lets ChatGPT reason about your app the same way it reasons about built-in tools.
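The "same way it reasons about built-in tools" point hinges on MCP's standard wire format. A sketch of a `tools/list` response, the JSON-RPC shape defined by the core MCP spec (the `search_recipes` tool itself is a hypothetical example):

```typescript
// Standard MCP `tools/list` response shape (JSON-RPC 2.0).
// The sample tool is made up for illustration.
const toolsListResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "search_recipes",
        description: "Search the recipe catalog",
        // Parameters are described with JSON Schema, so the model
        // knows exactly what arguments a call must supply.
        inputSchema: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    ],
  },
};
```

Since every app advertises itself through this same envelope, ChatGPT can plan a call into a third-party app with the same machinery it uses for built-in tools.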
"proposal for the MCP Apps Extension (SEP-1865) to standardize support for interactive user interfaces in the Model Context Protocol.
This extension addresses one of the most requested features from the MCP community and builds on proven work from MCP-UI and OpenAI Apps SDK - the ability for MCP servers to deliver interactive user interfaces to hosts.
MCP Apps Extension introduces a standardized pattern for declaring UI resources, linking them to tools, and enabling bidirectional communication between embedded interfaces and the host application."