Highlights
- Pixie set to elevate AI assistance in Pixel 9.
- Pixel 9 series to feature Tensor G4 SoCs.
- Gemini Pro API empowers developers with AI tools.
- Qi2 wireless charging technology in Pixel 9.
Google’s upcoming Pixel 9 series, expected in 2024, may debut an exciting new feature: a rumored Pixel-exclusive AI assistant named Pixie.
This new assistant, first reported by The Information, is expected to go beyond the current Google Assistant and redefine how users interact with their phones.
Pixie: An Advanced AI Assistant
Pixie is anticipated to integrate with various Google products and services, such as Gmail and Maps, to offer highly personalized assistance.
It aims to perform sophisticated multimodal tasks.
For instance, it could direct users to the nearest store for a product they’ve photographed, showcasing an advanced level of contextual understanding and utility.
The codename ‘Pixie’ might change before its final release, but the essence of what it represents—a smarter, more integrated AI assistant—is a significant leap forward in smartphone technology.
While Pixie is expected to be a standout feature in the upcoming Pixel 9 series, Google’s plans for this AI assistant are far-reaching.
The company intends to eventually extend Pixie’s capabilities to lower-end phones and other devices, including wearables.
The rumored AI assistant is tipped to be powered by Gemini Nano, Google’s on-device machine-learning model.
Pixel 9 Series: Anticipated Features
Alongside Pixie, the Pixel 9 series itself is shaping up to be a powerhouse in smartphone innovation.
The phones are likely to be equipped with the Tensor G4 SoC, codenamed “Zuma Pro,” the successor to the Tensor G3 chipset in the Pixel 8 series.
The Pixel 9 is also expected to adopt the Qi2 wireless charging standard, positioning it as one of the first Android phones to embrace this new technology.
Gemini Pro API for Developers
In a related development, Google has made Gemini available for developers and companies, allowing them to leverage the power of Large Language Models (LLMs).
The Gemini Pro API offers a comprehensive suite of AI tools and models, accessible through Google Cloud’s Vertex AI platform.
This initiative not only encourages developers to create unique applications but also reflects Google’s focus on refining and enhancing AI technology.
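For illustration, here is a minimal sketch of what a Gemini Pro call can look like with Google’s google-generativeai Python SDK and an API key from Google AI Studio; the API key and prompt below are placeholders, and the equivalent Vertex AI route is sketched later in the FAQs.

```python
# Minimal sketch: calling Gemini Pro through the google-generativeai SDK.
# Assumes `pip install google-generativeai` and an API key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Summarize the rumored Pixel 9 features in two sentences."  # placeholder prompt
)
print(response.text)
```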
Google Pixel 9 Specs
GENERAL
Sim Type | Dual Sim, GSM+GSM |
Dual Sim | Yes |
Sim Size | Nano+eSim |
Device Type | Smartphone |
Release Date | September 19, 2024 (Expected) |
DESIGN
Dimensions | 158.6 x 74.8 x 8.9 mm (6.24 x 2.94 x 0.35 in) |
DISPLAY
Type | Color AMOLED Screen (16M Colors) |
Touch | Yes |
Size | 6.26 inches, 1080 x 2400 pixels, 120 Hz |
Aspect Ratio | 20:9 |
PPI | ~ 420 PPI |
Screen to Body Ratio | ~ 89.3% |
Glass Type | Corning Gorilla Glass Victus Plus |
Features | HDR10+, Always-on Display |
Notch | Yes, Punch Hole |
MEMORY
RAM | 8 GB |
Storage | 256 GB |
Storage Type | UFS 4.1 |
Card Slot | No |
CONNECTIVITY
GPRS | Yes |
EDGE | Yes |
3G | Yes |
4G | Yes |
5G | Yes |
VoLTE | Yes, Dual Stand-By |
Vo5G | Yes |
Wifi | Yes, with wifi-hotspot |
Bluetooth | Yes, v5.3, A2DP, LE, aptX HD |
USB | Yes, USB-C v3.1 |
USB Features | USB Tethering, USB on-the-go, USB Charging |
EXTRA
GPS | Yes, with dual-band A-GPS, GLONASS, GALILEO, QZSS, BDS |
Fingerprint Sensor | Yes, In Display |
Face Unlock | Yes |
Sensors | Accelerometer, Gyro, Proximity, Compass, Barometer, Ambient Light Sensor, Magnetometer |
3.5mm Headphone Jack | No |
Extra | NFC |
Water Resistance | Yes, up to 1.5 m for 30 min |
IP Rating | IP68 |
Dust Resistant | Yes |
Extra Features | Stereo Speakers, 3 Microphones, Noise Suppression |
CAMERA
Rear Camera | 50 MP f/1.9 (Wide Angle) + 8 MP f/2.2 (Ultra Wide) with autofocus |
Features | Magic Eraser, Motion Mode, Real Tone, Face Unblur, Panorama, Manual white balancing, Locked Folder, Portrait Mode, Portrait Light, Super Res Zoom, Motion autofocus, Frequent Faces, Dual exposure controls, Live HDR+ |
Video Recording | 4K, 1080p |
Flash | Yes, Dual LED |
Front Camera | Punch Hole 12 MP (Wide Angle) |
Front Video Recording | 1080p (FHD) @ 30 fps |
TECHNICAL
OS | Android v14 |
Chipset | Google Tensor G4 |
CPU | Octa Core Processor |
GPU | Mali-GU |
Java | No |
Browser | Yes |
MULTIMEDIA
Music | Yes |
Video | Yes |
FM Radio | No |
Document Reader | Yes |
BATTERY
Type | Non-Removable Battery |
Size | 5000 mAh, Li-Po Battery |
Fast Charging | Yes, 80W Fast Charging |
Wireless Charging | Yes, 45W |
Reverse Charging | Yes |
FAQs
How will Pixie enhance the user experience in Pixel 9?
Pixie, the rumored AI assistant for the Pixel 9 series, is designed to integrate with Google services like Gmail and Maps for a personalized experience.
It promises to perform complex tasks, such as guiding users to stores based on photographed products, showcasing a new level of AI interaction and convenience in smartphones.
What can we expect from the hardware of the Pixel 9 series?
The Pixel 9 series is anticipated to be a leap in smartphone technology, powered by the next-gen Tensor G4 SoCs, codenamed “Zuma Pro.”
This chipset is expected to bring enhanced performance and efficiency compared to the Tensor G3 in the Pixel 8 series, making the Pixel 9 a compelling choice for tech enthusiasts.
What is the Gemini Pro API and how does it benefit developers?
Google’s Gemini Pro API, part of the larger Gemini project, offers developers access to advanced AI tools and models through the Google Cloud Vertex AI platform.
This move enables developers to harness the capabilities of Large Language Models, fostering innovation and new applications in various sectors.
Will the Pixel 9 series introduce new charging technology?
The Pixel 9 series is set to be among the first Android phones to adopt the Qi2 wireless charging standard, signaling Google’s commitment to incorporating cutting-edge technologies.
This feature is expected to offer faster and more efficient wireless charging capabilities.
What is the controversy surrounding the Google Gemini AI video?
The Google Gemini AI video has been criticized for misleadingly portraying the AI’s real-time interaction capabilities, which were actually staged using still images and text prompts.
How did Google’s Gemini AI video differ from actual Gemini capabilities?
Unlike the real-time interactions suggested in the video, Gemini’s responses were pre-generated and not a result of live voice communication.
What has been Google’s response to the Gemini AI video criticisms?
Oriol Vinyals, Gemini’s co-lead, acknowledged the editing of the video for brevity but did not initially clarify the actual interaction process used in the demo.
How does the Gemini AI video controversy relate to past Google AI presentations?
This incident mirrors Google’s previous AI demonstration with Duplex at Google I/O 2018, which was later revealed to be a pre-recorded segment rather than a live demo.
What is Google Gemini and how does it enhance Bard?
Google Gemini is a versatile AI model with three variants – Ultra, Pro, and Nano – designed to significantly boost Bard’s reasoning, planning, and understanding capabilities, thus enhancing its efficiency across various platforms.
How is the rollout of Gemini being implemented in Bard?
The implementation of Gemini in Bard is a two-phased approach. Initially, Gemini Pro is being integrated, focusing on English language processing in over 170 countries.
The second phase will introduce Bard Advanced with Gemini Ultra in 2024, offering more advanced data processing capabilities.
What benchmarks did Gemini Pro outperform GPT-3.5 in, and what does it indicate?
Gemini Pro excelled in six out of eight benchmarks, notably in MMLU and GSM8K, surpassing GPT-3.5.
This suggests that while Gemini Pro is catching up to existing AI technologies, it signals Google’s growing competitiveness in the AI market.
What is Gemini Ultra and how does it excel in language understanding?
Gemini Ultra is the most advanced model in the Gemini AI series. According to Google, it is the first model to outperform human experts on MMLU (Massive Multitask Language Understanding), a benchmark spanning 57 subjects including math, physics, history, law, medicine, and ethics.
This makes Gemini Ultra particularly well suited to complex, multi-subject queries and tasks.
How does Gemini Pro support developers and enterprise customers?
Gemini Pro offers versatility for a broad spectrum of tasks and is primarily targeted at developers and enterprise customers.
Accessible through the Gemini API in Google AI Studio or Google Cloud Vertex AI, Gemini Pro is integral to powering Google products like the Bard chatbot and the Search Generative Experience.
Its key strengths lie in advanced reasoning, planning, and understanding, making it suitable for sophisticated AI applications in various industries.
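As a hedged sketch of the Vertex AI route mentioned above, the snippet below uses the Vertex AI Python SDK’s preview generative-models module; the project ID, region, and prompt are placeholders.

```python
# Minimal sketch: calling Gemini Pro through Google Cloud Vertex AI.
# Assumes `pip install google-cloud-aiplatform` and an authenticated gcloud environment.
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")  # placeholder project/region

model = GenerativeModel("gemini-pro")
response = model.generate_content("Draft a two-line product description for a camera app.")
print(response.text)
```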
What is the purpose of Gemini Nano, and how does it benefit mobile applications?
Gemini Nano is designed for specific, streamlined tasks and is optimized for mobile devices, particularly Android. Its main purpose is to enhance the efficiency and performance of mobile applications.
Android developers will soon be able to incorporate Gemini Nano into their projects, leveraging its capabilities to improve app functionalities and user experiences on mobile platforms.
What is Google Bard AI?
Billed by some as a potential “ChatGPT killer,” the Google Bard AI chatbot uses a class of deep-learning models known as large language models to respond to questions submitted as text.
The chatbot is built on LaMDA technology and is designed to draw on the web to find the most recent answers to questions.
An experimental conversational AI service developed by Google, Bard learns from its interactions with users to improve its performance.
Google’s CEO Sundar Pichai introduced the Google Bard AI chatbot in a blog post, showcasing the company’s recent priority on AI.
Pichai has expressed his enthusiasm for adapting cutting-edge AI research and development to real-world problems.
Because of the pace of technological change and slower decision-making at large firms like Google, other organizations such as OpenAI have pulled ahead in AI development and applications.
OpenAI concentrated on building high-quality models and letting people discover their own uses for them, while Google worked on incorporating AI into its existing business plans.
How to use the Google Bard AI chatbot?
If you are chosen as a beta tester, all you have to do to use the Google AI chatbot is open the Google app on your smartphone and tap on the chatbot icon.
As with ChatGPT, enter your prompt and hit Enter.
How to access Google Bard AI?
Currently, only a small number of people have access to the Google Bard AI link for testing purposes.
To reduce the amount of time and energy spent on computation, Google is developing a “lightweight model version of LaMDA.”
Unfortunately, the Google Bard AI chatbot is not yet widely available for use.
However, once the Google Bard AI link is shared, it will likely be integrated into Google Search and can be accessed by asking questions through the search bar.
The chatbot draws information from the web to provide up-to-date answers to text prompts.
It is designed to provide easy-to-digest answers to complex questions and can help with tasks such as planning a baby shower, comparing movies, and getting lunch ideas.
It is not yet known if the chatbot will be available through other Google products such as Google Assistant or Google Maps.