Highlights
- OpenAI is reportedly working on a consumer device focused on voice interaction, expected in late 2026 or early 2027.
- A new audio AI model debuting in early 2026 is expected to feature more natural speech, interruption handling, and the ability to talk at the same time as the user.
- OpenAI is pursuing screenless, AI-native hardware like smart speakers or wearables, positioning itself in the voice assistant market.

OpenAI is reportedly doubling down on audio-focused artificial intelligence as it prepares for the launch of a new consumer device that prioritises voice interaction instead of traditional screens. The company is said to be reorganising internal teams and upgrading its voice technologies to support this next phase of its product strategy.
According to a recent report published by The Information, OpenAI has consolidated its engineering, product, and research teams over the past two months to accelerate progress in audio AI. The move is aimed at strengthening the company's voice models, which are currently considered less advanced than the text-based systems that power ChatGPT.
People familiar with the matter told The Information that this renewed focus on audio is closely linked to OpenAI’s plans for a new consumer hardware product described as “largely audio-based.” The device is expected to arrive in roughly a year, potentially launching in late 2026 or early 2027.
It may also be part of a broader lineup that could include smart glasses or screenless smart speakers designed to act as AI companions rather than conventional gadgets.
A major step in this initiative is the planned debut of a new, more advanced audio model in early 2026, possibly by the end of the first quarter. The report claims this model will deliver more natural-sounding speech, improved handling of interruptions, and the ability to talk at the same time as the user.
These features are expected to go beyond current voice AI systems, which often struggle with overlapping speech and real-time conversational flow.
OpenAI’s hardware ambitions gained further traction following its acquisition of io Products in May 2025. The startup, founded by former Apple design chief Jony Ive, was reportedly bought for around $6.5 billion. Ive and his team are now playing a key role in shaping OpenAI’s design direction with a clear focus on reducing reliance on screens and addressing shortcomings seen in earlier consumer devices. Observers suggest that an audio-first product aligns with Ive’s vision of less addictive, more ambient computing experiences.
While concrete details about the device's form factor remain scarce, earlier leaks have hinted at anything from a desk-based unit to wearable hardware. The emphasis on voice interaction echoes CEO Sam Altman's criticism of simply "bolting AI onto existing" products; instead, OpenAI appears to be pursuing AI-native designs built from the ground up.
As of now, OpenAI has not officially commented on these developments. If successful, this strategy could put OpenAI in direct competition with established voice assistant players such as Apple and Google.
FAQs
Q1. What kind of device is OpenAI planning to launch?
Answer. OpenAI is working on a new consumer hardware product described as “largely audio-based,” focusing on voice interaction instead of traditional screens.
Q2. When is the OpenAI device expected to arrive?
Answer. The audio-first device is tipped to launch in late 2026 or early 2027, following the debut of a new advanced audio model in early 2026.
Q3. Why did OpenAI acquire io Products, and how does it fit into this plan?
Answer. OpenAI bought io Products (founded by Jony Ive) for around $6.5 billion to shape its design direction, emphasising screenless, ambient computing experiences that align with Ive’s vision of less addictive, AI-native devices.
