
Today, AI assistants exist primarily as apps layered on top of existing OS platforms. Whether it's Copilot on Windows, Gemini on Android, or Siri on macOS, these assistants sit above the OS as overlays or floating app windows. We have yet to see what an operating system looks like when it's built from the ground up with AI woven through it.

That appears to be what Microsoft is preparing to introduce on Windows in the next five years, potentially with a Windows 12. Several top-level Microsoft executives have teased that whatever comes next for the platform, and for computing as a whole, will be a big shift in the wake of AI.

I think a lot of people will struggle to get their heads around the idea of voice as a reliable, primary input method on a PC, but with agentic AI and an OS that can understand user intent and natural language, it's going to feel a lot more natural than you might think.

It's not just Microsoft, either. Apple is also rumored to be working on a new feature for iOS 26 that places voice at the center of the experience, letting iPhone users navigate apps just by telling the iPhone what their intent is.

On Windows, voice will likely be used in addition to a mouse and keyboard. Instead of two primary input methods, there will be three: typing, touch/mouse, and voice. You likely won't have to use voice to get your work done, but your workflow will be easier if you do.

Of course, privacy will be a huge concern. It will take a lot of personal user data to make these experiences truly useful, and with the company already saying that a balance between local compute and cloud compute will be required to make them a reality, I suspect there will be some pushback.
