
CES 2026 Challenges: Exploring the Hype and Divergence Behind 50 AI Projects
The annual tech extravaganza has arrived once again, and this year’s dominant keyword is still AI. Since AI’s emergence, industries have been exploring how to integrate its capabilities and searching for the next groundbreaking product. CES 2026 suggests two main conclusions: first, industries are indeed integrating AI, creating a dazzling array of innovations. Second, however, I did not personally encounter any standout AI-native products; the much-anticipated “iPhone Moment” remains distant. In this article, we showcase the projects and highlights from this year’s CES, so that readers can explore the essence of CES 2026 from the comfort of their homes.
1. Humanoid Robots: A Surge in Hardware and Algorithms
First, let’s look at the most talked-about category at this year’s CES: humanoid robots. There were indeed many on display. Unitree Robotics continues to capture attention, marking its seventh year at the exhibition. Each year, we observe Unitree’s advantages and progress in humanoid robot hardware. Recently, videos of Unitree robots performing martial arts and acrobatics at concerts have gone viral. According to Unitree’s official data, its shipments exceeded 5,500 units in 2025. While Unitree’s booth at CES was modest, we eagerly anticipate what new tricks the company will unveil in 2026 and whether there will be breakthroughs in robotic intelligence.
Among the humanoid robots, I found the one from Sharpa, a Singaporean company, particularly impressive. Renowned for its dexterous hand with 22 active degrees of freedom, matching the human hand’s freedom of movement 1:1, it is claimed to be capable of nearly every task a human hand can accomplish, earning it the nickname “the Rolls-Royce of dexterous hands” and a price tag of $50,000. However, staff discreetly informed me that there is currently one task the hand cannot perform. Alicia Veneziani, Vice President of Sharpa, said: “We still cannot…” Fingers crossed. Beyond the dexterous hand, Sharpa demonstrated quick-reaction ping-pong, card dealing driven by multimodal reasoning, and folding paper windmills through complex planning. It even snapped my first-ever selfie taken by a robot! These demos convey that Sharpa is not just a hardware company but has built a comprehensive stack from hardware to software algorithms.
Moreover, Alicia noted that today’s robots cannot maintain high precision with dexterous hands while also walking on two legs. Some movements require upper-body coordination, but when a robot relies too heavily on its legs, its upper body easily loses balance. “Our core goal is to solve manipulation challenges, so we prioritize the tactile feedback system in the hands and have developed an AI model that integrates this data in real time, ensuring precise and reliable object manipulation.” Among the many humanoid robots at CES, some companies are adopting low-cost or even open-source strategies, targeting specific task scenarios for commercial viability. Others, like Sharpa, focus on pushing the boundaries of the technology. That is a promising developmental path, but avoiding the pitfalls that once tripped up Boston Dynamics is a challenge worth noting.
Speaking of Boston Dynamics, the industry veteran has kept a relatively low profile for some time. In 2020, Hyundai Motor Group agreed to acquire an 80% stake in the company for $880 million, despite its slow commercial prospects. This year at CES, Boston Dynamics publicly unveiled its bipedal humanoid robot, Atlas. A representative stated, “This is our third-generation robot aimed at the market. From Spot to Stretch, we’ve accumulated considerable experience in commercialization, and we now have thousands of robots actively working.” They believe their strength lies in understanding real customer scenarios and what products need to deliver genuine value. “We believe we make the best robots because Boston Dynamics has been in this space for 30 years, gathering top robotics experts,” the spokesperson added.
During a discussion, Chen Qian, Co-Founder of Silicon Valley 101, asked, “What is the most important lesson you’ve learned?” The spokesperson replied, “Simply making a ‘cool-looking’ robot isn’t enough. Even if the robot can work, that’s still insufficient. You must consider comprehensive aspects for customers: how to provide after-sales service? How to reduce robot costs through manufacturing optimization? This holistic thinking has earned us the reputation of being a ‘reliable robot provider,’ and every technology we demonstrate is practical and implementable.” Atlas is designed for “working” in factories: it features fully rotating joints, stands 1.9 meters tall with a 2.3-meter wingspan, and has 56 degrees of freedom. It is reportedly already operational in Hyundai’s factories. Additionally, Boston Dynamics announced a partnership with Google DeepMind to integrate the Gemini Robotics foundation model with Atlas. I was surprised that such a large company did not build its own foundation model for AI development, so I asked about it.
When asked why they didn’t establish an AI lab internally, the spokesperson clarified that they have already developed several AI models independently. However, for true success in AI, scale and top-tier technology are essential. “Currently, we believe Google’s Gemini model is the most advanced, with data and information scales that we cannot achieve alone. Through this collaboration, we are confident that we can achieve significant breakthroughs in the future.” Boston Dynamics has finally abandoned the long-criticized hydraulic system, moving from a ‘show-off’ approach toward a scalable, industrial direction. After numerous setbacks, this once-renowned humanoid robot company has taken a pragmatic step forward. However, with fierce competition in humanoid robotics, including Chinese manufacturers with strong hardware cost advantages and various algorithm-driven Silicon Valley companies, it remains to be seen what market advantages Boston Dynamics has left.
2. Companion Robots: More Than Just Human Interaction
Compared to humanoid robots, companion robots have already made significant strides, especially as multimodal capabilities open up a vast market, one primarily dominated by Chinese companies. At CES 2026, we witnessed an explosion of such products. First, we encountered Kata Friends, characterized by its adorable design. It incorporates a multimodal perception system with visual, auditory, and tactile capabilities, recognizing user expressions, commands, and physical contact. It actively seeks affection, presenting itself as a pet focused on emotional interaction. When I visited the booth, I was informed that it had worked all day and was charging, hence its lack of active interaction; official materials, however, showcased its cuteness.
In contrast, the previously popular pet companion product Mirumi, which gained attention at last year’s CES, appears somewhat overshadowed this year. In 2025, it became a hit as a backpack AI accessory that could be activated by voice, simulating curiosity, shyness, and refusal. I personally liked this product for its minimalist design, featuring only two sensors: one for detecting nearby objects and another for measuring movement. However, after a year of success, its feature set now looks thin amid the industry’s rapid growth in 2026. The loyalty and long-term engagement of users with AI companion products therefore warrant careful consideration.
Let’s explore a few AI companion products that diverge from Mirumi’s design philosophy, focusing instead on deeper emotional engagement. One of the standout products at CES this year is Fuzozo, a plush robotic pet that emphasizes personality traits, primarily targeting female users. Through a nurturing and social approach, it aims to establish emotional connections with users. The Fuzozo staff explained, “For example, if you pet its head or chin, it provides different feedback, akin to petting a cat.” One of their units seemingly went missing on its way to America, or perhaps wandered off on its own. The staff described it as though it possessed awareness and agency, which charmed the audience and perhaps contributed to its impressive sales: 50,000 units within three months of launch and 120,000 units in six months, with strong user engagement demonstrated by the daily token consumption of thousands of users.
The staff elaborated on the personalities of their toys: “Some are ‘puppy-like’ and always think you’re right, while others are ‘jealous’ and easily get envious. These personalities can be adjusted, and daily interactions will influence them, changing weekly. Every day at 10 AM, they write a little diary you can peek at to see their thoughts.” Observing the booming companion robot market, brands are increasingly designing products with more specific functions and targeting precise demographics. For instance, Luka is aimed at children and teenagers. A Luka staff member noted, “Previously, our products relied on image recognition, which limited our interactions. Today’s large models can think divergently based on text and images, enabling real-time, interruptive interactions.” The star product showcased by Ling Universe is the Luka reading companion robot, demonstrating multilingual storytelling, dialogue, and in-depth picture book analysis, which signifies a transformative reading experience through AI interaction.
Moreover, they introduced the portable AI partner, Xiao Fang Ji, which supports engaging and creative AI multimodal interactions. The Luka staff stated, “We now have two interactive methods: one allows you to take a photo and have the AI generate an interactive version, while the other allows the AI to create an intelligent entity based on your needs and the photo you took.” The trend of developing “specific companionship products for specific demographics” seems to be the current direction of AI hardware in achieving product-market fit (PMF).
At the exhibition, I also came across a seemingly “dull” AI “fake dog,” which turned out to be Jennie, designed to accompany Alzheimer’s patients. Priced at $1,500, it targets individuals caring for those with cognitive impairments, who do not require complex AI interactions but rather companionship and basic response functions. Furthermore, differentiating in design is also a trend in this market; for instance, OLLOBOT, resembling Sid the sloth from the animated movie “Ice Age,” captures attention with its unique appearance. This AI robot can recognize objects throughout the house and sense family emotions, positioning itself closer to a smart home assistant.
OLLOBOT staff explained, “The most important aspect is that as a cyber pet, it provides more emotional value and companionship. For example, its camera can recognize human expressions and approach to offer hugs.” In addition to aiming companionship at humans, some products are designed to engage with other family members. Aura is a robot designed to interact with pets. While at work, I often wish to check in on my cats and dogs; this robot has an automatic patrol mode that helps locate them and activates a laser to entertain them, ensuring they stay happy and healthy.
After reviewing AI companion robots, I feel that with the rapid advancement of AI multimodal models, the enhancements in voice and visual capabilities have significantly improved hardware product interactions. This year’s AI companion products are evidently superior in performance compared to last year, yet they still remain confined to specific small-scale audiences. User engagement is still unverified, and many functions depend on cloud services, raising privacy concerns that remain unresolved. I look forward to seeing further innovative products emerging next year.
3. Automotive Industry and Autonomous Driving: The Battle of Screens, Self-Driving, and Ecosystem Rivalry
In a previous collaboration with Geely, you saw me racing at 170 km/h. Among major automotive manufacturers, comprehensively integrating AI, including autonomous driving, is the prevailing trend. One observation from this exhibition is that manufacturers are eager to cover the vehicle’s interior with screens, aiming for 360-degree coverage. Taiwanese component manufacturer HCMF even showcased the potential for still more screens in the future.
HCMF staff explained, “We embed Micro-LEDs into glass, turning the windows into touchscreens connected to your phone. This technology could be installed in taxis, allowing passengers to check if the driver is following the intended route. It can also incorporate gaming, enabling children in the backseat to play, or connect with the front electronic rearview mirror to check for obstacles when opening doors.” When asked about the lack of widespread adoption of this technology in standard vehicles, HCMF pointed to regulatory issues and cost as significant barriers. They noted, “Regulations currently require that rear areas remain unobstructed to prevent glare that could impair driving.” Moreover, the cost of such technology is still prohibitive, as it involves embedding it into curved glass, requiring further advancements before it can be installed in vehicles.
As display technology evolves and costs decrease, we can expect a growing number of screens in vehicles, even on windows. However, I question whether this is beneficial, as enjoying the view through car windows is a romantic experience for humans, yet screens are increasingly encroaching on our time spent appreciating the natural environment. I also noticed companies that focus solely on enhancing in-car gaming experiences, such as the Swiss gaming platform AirConsole. They provide a solution that allows users to use smartphones as controllers, collaborating with brands like BMW, Volkswagen, and Audi to offer a streamlined in-car multiplayer gaming experience.
AirConsole staff stated, “Our gaming solutions are already deployed in models from Audi, Porsche, BMW, Volkswagen, and Škoda. The automotive market is vast, and excelling in in-car gaming experiences alone is enough to support a company.” Looking ahead, vehicles are becoming pivotal smart spaces, presenting numerous unexplored products and experiential potentials. Meanwhile, larger players, like chip and software development company Synopsys, are ramping up efforts to develop comprehensive automotive supply chain solutions for next-gen infotainment systems, advanced driving assistance systems, and connectivity for autonomous driving technologies. They recently completed a $35 billion acquisition of engineering simulation software and technology company Ansys.
Undoubtedly, autonomous driving is a hot topic this year. Waymo showcased a massive booth at CES, revealing their current operating model and publicly introducing their next-generation self-driving taxi, Ojai. This marks the transition of autonomous driving from experimental to urban scalability, with optimizations in vehicle appearance, passenger space, and sensor costs. They plan to initiate broader passenger services in 2026, aiming to expand to more cities and even overseas markets, indicating an ongoing rivalry with Tesla. At the exhibition, we also saw Tensor, known for their “L4 autonomous private vehicle,” equipped with an unprecedented number of sensors. Their latest model features a foldable steering wheel for traditional driving and can be stored away during autonomous operations, with production expected to commence in 2026.
This year’s most significant autonomous driving announcement came from NVIDIA. During a keynote speech, CEO Jensen Huang announced that NVIDIA’s first full-stack autonomous vehicle will begin testing on American roads in the first quarter of 2026, with plans to test L4-level robotaxi services with partners in 2027. While exploring NVIDIA’s exhibit, we encountered the sleek smart vehicle TRINITY, designed for urban mobility. This intelligent self-balancing three-wheeled vehicle uses DGX Spark as its AI brain for real-time vision-language model processing, allowing it to lean into curves. Combining agility and speed, it reaches 0-100 km/h in just two seconds, more thrilling than many supercars. Beyond driving performance, TRINITY serves as a clever AI assistant, managing itineraries, communications, and cloud tasks based on the driver’s preferences. While it appears impressive, it remains largely conceptual, and we await its practical realization.
Additionally, I discovered a charming little device from NVIDIA: an open-source robot priced at $299 that can run cutting-edge open-source models for voice recognition, visual perception, and personalized interactions. We also observed a trend in delivery robots, a rapidly emerging sector. At CES, NVIDIA showcased its investment in Serve Robotics, which applies AI and automation to “last-mile” delivery. Through a partnership with Uber Eats, its autonomous sidewalk robots handle restaurant deliveries, aiming to transform food logistics while introducing new competition for delivery personnel. Serve Robotics staff stated, “This is an autonomous delivery robot, classified as L4 autonomous driving, using lidar, cameras, and AI to autonomously navigate streets and sidewalks, specifically designed for ‘door-to-door’ delivery. Currently, we operate in Miami, Dallas, Atlanta, Chicago, Fort Lauderdale, and Alexandria.” When asked about the number of operational units on the streets, Serve Robotics staff shared, “Last December, we reached a milestone of 2,000 robots operating simultaneously on the streets, which is highly significant for our company.” In the U.S., where labor costs are high, such autonomous delivery robots address the “last-mile” challenge.
In contrast, similar technology and products are rapidly advancing in China, where small delivery robots address the “last 100 meters” problem. New Stone Technology, for example, targets markets like the Middle East, where riders are scarce, aiming to serve consumers who would otherwise struggle to receive their orders. Its fleet includes three-wheeled, six-wheeled, and twelve-wheeled models for city-wide coverage. Additionally, we observed that agricultural robots have a quicker path to market deployment, thanks to less stringent safety regulations than those governing urban autonomous vehicles. For instance, John Deere unveiled an AI tractor at CES designed for large farms, equipped with 16 independent cameras for 360-degree field observation.
While Tesla did not have a booth at CES, its presence was still felt strongly. Outside the venue, The Boring Company, another Musk enterprise, offered free Tesla rides to attendees, transporting them between various exhibition halls. Reports suggest that next year, a tunnel connecting the convention center directly to the airport will be completed, enhancing convenience. Although this tunnel currently relies on human drivers for Tesla vehicles, we look forward to seeing whether full self-driving (FSD) will be utilized for passenger transport at CES next year. In the automotive exhibition area, another critical focus is on Lidar technology, with companies like Hesai and RoboSense showcasing their advancements. With the explosion of the robotics sector, orders for lidar technology have surged, with Hesai announcing plans to double its annual production capacity. Beyond autonomous vehicles and agricultural machinery, robotics is becoming a new growth area and battlefield for lidar companies.
4. Lifestyle: The Practical and the Hype of AI-Enabled Products
Here, we find both groundbreaking product ideas and some that seem rather superfluous. Let’s explore how AI is transforming our lifestyles. First up is the AI mattress from Mu Si, featuring the new T11+ model. It employs “AI tidal algorithm 2.0,” integrating user motion data, sleep posture modeling, and heart rate monitoring to achieve real-time awareness and proactive adjustment of nighttime sleep conditions. Within less than a minute of lying down, the sensors in the mattress can detect breathing, heart rate, and subtle movements, dynamically adjusting the mattress’s flexible elements. The Mu Si staff explained, “It will adjust your hips and legs to extend deep sleep duration, potentially increasing a person’s deep sleep time from around 20% to about 25%.” Additionally, they introduced a mattress designed to mitigate snoring, a common issue in many households. “This mattress contains a set of microphones that detect snoring, and upon detection, it elevates about 15 degrees to ensure clearer airways, thus reducing snoring.” While I find this product quite practical, many people are concerned about bedroom privacy, especially with sensors and microphones involved. AI in bedroom and bedding products, which depends on sensors and strong cloud models, will therefore face regulatory and user-trust challenges around data safety.
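The snore-response behavior described above is simple enough to sketch in code. The following is purely a hypothetical illustration, not Mu Si’s actual firmware; the sound threshold, the five-sample window, and all names are invented, while the 15-degree lift comes from the staff’s description:

```python
# Hypothetical sketch of a snore-response loop like the one Mu Si describes.
# The 15-degree lift is from the article; thresholds and names are assumed.

SNORE_DB_THRESHOLD = 55   # sustained sound level treated as snoring (assumed)
SNORE_WINDOW = 5          # consecutive loud samples required before reacting
HEAD_LIFT_DEG = 15        # elevation the staff says helps clear the airway

def head_angle(mic_samples_db, current_angle=0):
    """Return the head-section angle after processing a run of mic samples."""
    consecutive = 0
    angle = current_angle
    for level in mic_samples_db:
        if level >= SNORE_DB_THRESHOLD:
            consecutive += 1
        else:
            consecutive = 0
            angle = 0                  # quiet again: return to flat
        if consecutive >= SNORE_WINDOW:
            angle = HEAD_LIFT_DEG      # sustained snoring: raise ~15 degrees
    return angle
```

The window guards against reacting to a single cough or a passing noise, which is the kind of debouncing any always-listening bedroom sensor would need.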
Next up is a fun electronic product from Zhuimi: a lamp combined with a hairdryer that attracted significant attention. This product addresses the common pain point of freeing up one’s hands, allowing users to dry their hair while sitting and scrolling on their phones. I also appreciate the lamp’s design, and if released, it may become a popular item among women. Zhuimi also showcased a series of product updates integrating AI, including an AI hairdryer. Staff explained, “When switched to AI mode, the hairdryer adjusts its output based on proximity: up close, both wind temperature and speed decrease, while farther away, both increase. The scalp can sense temperature, but hair cannot, so this eliminates the risk of unnoticed overheating.” They also unveiled an AI washing machine. “Using over 20,000 data points from our lab, the washing machine can match the most accurate program for your washing needs, ensuring effective yet energy-efficient cleaning. It can identify fabric types and colors, linking the washing instructions with smart capabilities, all connected through WiFi to our Zhuimi app.” They even demonstrated a sweeping robot that can climb stairs like a tank.
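The proximity rule the staff described maps sensed distance to a temperature and fan level. Here is a minimal sketch of that idea; the distance bands, output values, and function name are all assumptions for illustration, not Zhuimi’s actual control logic:

```python
# Hypothetical proximity-based control, per the rule Zhuimi staff described:
# close to the head -> cooler and gentler; farther away -> hotter and faster.
# All distance bands and output levels are invented for illustration.

def dryer_setting(distance_cm):
    """Map sensed distance to an (air_temperature_c, fan_level) pair."""
    if distance_cm < 10:        # very close: protect the scalp
        return (45, 1)
    elif distance_cm < 20:      # mid range: moderate output
        return (55, 2)
    else:                       # far away: more heat and airflow needed
        return (65, 3)
```

In a real device the mapping would presumably be continuous and smoothed over time rather than stepped, but the monotonic distance-to-output relationship is the core of the feature.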
Speaking of sweeping robots, Mova showcased a flying sweeping robot capable of reaching the second floor. Watching the demonstrations raised questions about the necessity of developing stair-climbing robots, given the relatively low demand for such features in the market. It appears more like a display of technological prowess than a practical innovation. Richard Xu, North America Chief Representative of Daguan Capital, commented, “We can find many intriguing ideas in various hardware scenarios, almost miraculous, but when it comes to everyday practicality, one has to question whether these features are genuinely needed. If we approach this from a market perspective, a niche need will unlikely generate significant revenue or market share.” He noted that large companies often pursue such innovations to showcase their capabilities.
Interestingly, we saw advancements in robotic lawnmowers this year as well. The Navimow X3 intelligent lawnmower has significantly improved coverage area and charging times, now able to cover a 10,000 square meter garden. Due to technological advancements and decreased costs, integrating various technology routes has become feasible, allowing products to handle diverse scenarios more adeptly. Navimow staff explained, “Radar and vision technology work well for medium to small yards, while RTK (satellite positioning technology) is less stable in narrow and complex areas, but excels in open spaces. This robot employs a dual integration of RTK and vision technology to effectively cover larger lawn areas, as no single technology can address every scenario.” Additionally, Woan unveiled the world’s first AI tennis robot, Acemate, which autonomously moves to complete returns using AI visual recognition, high dynamic interaction, and real-time decision-making, providing training support for national athletes.
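The Navimow staff’s point, that RTK excels in open spaces while vision handles narrow, obstructed areas, amounts to choosing a localization source per patch of lawn. The sketch below is a hypothetical illustration of that selection logic; the quality threshold and mode names are invented, not Navimow’s implementation:

```python
# Hypothetical sketch of dual RTK + vision positioning, per the trade-off
# Navimow staff described. Threshold and names are invented for illustration.

RTK_QUALITY_MIN = 0.8   # assumed normalized fix quality needed to trust RTK

def positioning_mode(rtk_quality, vision_features):
    """Choose a localization source for the mower's current surroundings."""
    if rtk_quality >= RTK_QUALITY_MIN:
        return "rtk"             # open sky: satellite fix is reliable
    if vision_features > 0:
        return "vision"          # cluttered area: visual landmarks available
    return "dead_reckoning"      # neither source usable: coast on odometry
```

As the staff put it, no single technology covers every scenario; the value is in falling back gracefully when one source degrades.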
Another intriguing AI product that caught my attention was the AI refrigerator, although I still consider it somewhat conceptual. Qualcomm demonstrated a fridge capable of identifying the items inside, tracking purchase dates, and detecting expired food, even generating dinner recipes based on the remaining ingredients. In the future, it could connect to the vehicle system, allowing the car to drive itself to the supermarket for groceries and return. The thought is quite exciting. Qualcomm’s motivation behind all this is to showcase demand for its low-power chips. The staff explained, “Our recommended recipes are personalized based on your health status and preferences.” As Qualcomm traditionally focuses on mobile technology, its chips are designed for low power consumption while maintaining robust AI capabilities.
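The inventory side of the demo, tracking purchase dates and flagging expired food before suggesting recipes, is straightforward bookkeeping. A minimal sketch, with assumed shelf lives and invented names (this is not Qualcomm’s software):

```python
# Hypothetical inventory check like the one the Qualcomm fridge demo implies:
# track purchase dates, flag expired items, and report what's still usable
# for a recipe suggestion. Shelf lives and item names are invented.

from datetime import date, timedelta

SHELF_LIFE_DAYS = {"milk": 7, "eggs": 21, "spinach": 5}  # assumed values

def usable_items(inventory, today):
    """Split inventory {item: purchase_date} into (fresh, expired) lists."""
    fresh, expired = [], []
    for item, bought in inventory.items():
        life = SHELF_LIFE_DAYS.get(item, 3)   # unknown items: short default
        if today - bought > timedelta(days=life):
            expired.append(item)
        else:
            fresh.append(item)
    return sorted(fresh), sorted(expired)
```

The fresh list would then feed a recipe model; the hard part in practice is the vision system that recognizes the items in the first place.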
In the smart wearable and exercise sector, the consumer-grade exoskeleton product Hypershell X Ultra utilizes self-developed AI algorithms to reduce the physical burden by about 30 kilograms while climbing stairs or hiking. Additionally, the AI bird feeder remains a textbook example of product-market fit (PMF). This market is intriguing and completely outside my expertise, as I am not the target audience, but data indicates that the global wild bird product market exceeds $7 billion, with the U.S. being one of the most lucrative markets for bird-related products. For instance, Birdfy, a Chinese company, achieved monthly sales of over a million dollars with its smart bird feeder. At CES, Birdfy launched two new products, including the Birdfy Feeder Vista, equipped with dual cameras capable of capturing 14 million pixel panoramic images and recording 6K videos, with built-in AI to identify visiting bird species.
5. AI Wearables: The Race for Glasses, Rings, and Interaction Forms
Finally, let’s discuss several AI wearable products. This field is of keen interest to investors, who hope to discover the next AI-native hardware form, something that might deliver the next “iPhone moment.” The smart glasses sector has gained tremendous traction in recent years, with competition intensifying; this year, however, there haven’t been many groundbreaking technological updates. Meta offers two different approaches: one focuses on AI photography glasses, while the other emphasizes AR glasses powered by AI functionality. The Ray-Ban Meta emphasizes high-definition first-person photography and AI multimodal understanding, priced at $299, while the Ray-Ban Display prioritizes information display and immersive interaction, priced at $799.
This year, Xreal’s flagship product is the Xreal 1S, positioned as entry-level personal cinema glasses, along with the ASUS ROG XREAL R1, designed for top-tier gamers and featuring dual 240Hz micro-OLED displays. These products have seen upgrades in display quality, immersive experience, and pricing. The Rokid glasses line includes AI glasses using diffraction waveguide technology for a monochrome green display, and AR glasses with Micro-OLED screens for full-color display. The monochrome green option is currently the most stable, and product capabilities have notably improved this year, including teleprompter, real-time translation, navigation, and AI Q&A functions, indicating genuine utility and market potential.
Rokid staff explained, “When used as a teleprompter, it automatically follows your speech rate.” Core functions have been enhanced, including translation, teleprompter features, and photography, with updates for navigation and payment. They have also introduced the industry’s first in-glasses agent store, where various AI agents can interact with users and provide services. Many consider glasses the ultimate form of AI hardware interaction, but there are alternative viewpoints. Some argue that many people don’t wear glasses regularly, making it a flawed premise to adopt glasses solely for AI functionality. Meanwhile, AI rings and other forms are being explored as potential breakout hardware.
Peter Pan, Founding Partner of Hat-Trick Capital, remarked, “Every year I come to CES, I seek out what could be considered AI-native devices. Behind this concept, people are looking for devices that can support the next generation of computing platforms. Last year, glasses were quite popular, and this year’s progress is a continuation of that. However, we are also watching for new AI hardware forms that can develop. Rings are clearly an alternative form, and we see many products resembling rings this year, capable of sensing bodily data and carrying out more functions. While glasses will continue to evolve, their improvement will also depend on the development of edge AI, which currently lacks outstanding performance.” Regarding AI rings, many startups are exploring this direction, believing they offer a more natural wearing experience compared to glasses and are easier for users to try and adopt for specific applications. For example, Vocci is an AI ring focused on audio recording and note-taking. JY Jia of Gyges Labs stated, “We believe a better approach is to embed intelligence at your fingertips. During important meetings, you can double-tap it to start recording, and a single tap can flag key points. Additionally, we offer a capture mode that lets you issue commands directly to the AI, like asking it to remember a phone number or summarize your experiences at CES.”
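The tap-gesture scheme JY Jia described, double-tap to toggle recording and single tap to flag a key point, boils down to classifying tap timestamps. Here is a hypothetical sketch; the 0.4-second window and the event names are assumptions, not the product’s actual firmware:

```python
# Hypothetical tap-gesture dispatch like the ring interaction JY Jia
# described: double-tap toggles recording, single tap flags a key moment.
# The timing window and event names are invented for illustration.

DOUBLE_TAP_WINDOW = 0.4   # assumed max seconds between taps of a double-tap

def classify_taps(tap_times):
    """Turn a sorted list of tap timestamps into gesture events."""
    events, i = [], 0
    while i < len(tap_times):
        if i + 1 < len(tap_times) and tap_times[i + 1] - tap_times[i] <= DOUBLE_TAP_WINDOW:
            events.append("toggle_recording")   # two taps close together
            i += 2
        else:
            events.append("flag_key_point")     # isolated single tap
            i += 1
    return events
```

On real hardware the same logic would run over accelerometer events with debouncing, but the timestamp-window idea is the essence of distinguishing the two gestures.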
Lastly, I encountered some fun and innovative products, such as a self-driving wheelchair from Strutt, which aims to show that autonomous driving technology should serve not just high-end cars but also those with limited mobility. The company works to compact and integrate complex autonomous driving systems into personal mobility devices. Additionally, AI gaming hardware is becoming increasingly common, including chess sets designed for solo play against an AI opponent. Another entertaining product is the stringless guitar from Mova, designed for easy playability while preserving control. It features a fully automated intelligent accompaniment function that seamlessly syncs drum beats and chords with the user’s strumming rhythm, offering real-time support and visual guidance for beginners.
On the flip side, I encountered some questionable products, such as a body dryer that seems unnecessary, as a towel would suffice. Trying out such an absurd product led to some embarrassing moments when I was recognized. Also, a Japanese brand, Yukai Engineering, showcased a portable fan called Baby FuFu, designed to clip onto baby strollers, helping infants stay cool during hot summer days. While it is undeniably cute, it lacks substantial technological advancements. The mere design could attract consumers. Lastly, there is an AI picture frame that uses a built-in microphone and GPT multimodal capabilities to generate images on demand, utilizing a colored electronic ink screen with a five-year battery life, eliminating the need for frequent recharging.
6. The AI Product Explosion: Yet, the “iPhone Moment” Remains Distant
In conclusion, we have explored 50 AI or AI-related projects at CES, and I believe the key takeaway is that while AI products are improving thanks to advancements in both large models and edge models, we have yet to experience an “iPhone Moment” for AI hardware. As Richard Xu from Daguan Capital stated, “From an innovation perspective, getting people to accept a new form of AI hardware is incredibly challenging. I joked with a friend that the potential for AI-native hardware could be found in earphones or even dentures, as glasses may not appeal to everyone.” Of course, many may view this as an extreme opinion, and it concerns AI-native hardware specifically: glasses may be hard to make universal, but as a combination of AI and smart hardware they remain a rational product.
Inventing the wheel was a foundational contribution, and creating a genuinely new form of AI-native product is innovation at that same foundational level, and comparably difficult. Building innovations on existing smartphones and AI products, by contrast, is entirely reasonable. With the enhanced capabilities of large AI models and various industries embracing AI, product effectiveness itself is improving, which has been my most significant takeaway from this year’s CES. I also sense the urgency among investors to discover the next “iPhone Moment”: a product that could fundamentally transform interaction paradigms and initiate a development cycle even more expansive than the mobile internet. No company has yet produced such a revolutionary product, although one may exist in a corner we have not yet noticed. One thing is certain: the AI wave has introduced numerous new ways to engage with hardware products. Some are flashy, while others are genuinely useful, gradually making their way into homes everywhere. I eagerly anticipate what interesting new products will emerge at CES 2027, following the advancements in multimodal models, edge models, and Physical AI in 2026.
Original article by NenPower. If reposted, please credit the source: https://nenpower.com/blog/navigating-the-ai-landscape-at-ces-2026-insights-from-50-innovative-projects-and-industry-trends/
