CES 2026 officially opened on January 6 in Las Vegas, and within hours it was obvious that the conversation had changed.
Artificial intelligence was everywhere—but not as a standalone product anymore. Instead, AI showed up as infrastructure. It powered robots, vehicles, energy systems, home devices, wearables, and tools that actually move, see, and act in the real world.
This year, CES wasn’t about apps or cloud features. It was about physical systems—machines designed to operate outside screens, with AI handling perception, decision-making, and control behind the scenes.
And nowhere was that shift clearer than on the show floor itself.
Robots Took Over the Show Floor
From the moment the doors opened, live robotics demos were running continuously. These weren’t short, scripted routines designed for applause. Humanoid robots walked, turned, adjusted posture mid-movement, and recovered balance in real time.
Several booths focused less on polished products and more on raw capability—showing how far control systems, sensing, and motion planning have come.
Unitree’s demo area highlighted full-body coordination. Robots transitioned smoothly between walking, rotating, and upper-body movement without pausing between actions. When nudged or destabilized, they corrected themselves instantly, a sign that perception and control were running continuously rather than step by step.
Nearby, Sharpa showcased a robotic hand mounted on a humanoid body for positioning. The hand performed precise manipulation tasks—adjusting grip force, finger placement, and orientation while handling small objects. Sharpa confirmed the hand is already shipping to universities, signaling that high-precision manipulation has moved beyond experimental lab setups.
LG’s Home Robot Signals a Different Priority
LG took the stage with CLOi, its autonomous home robot, framing it around what the company calls "physical AI."
CLOi uses a wheeled base for stability, a tilting torso, and two seven-degree-of-freedom arms ending in five-fingered hands. Its head functions as a mobile AI hub, packed with cameras, sensors, speakers, a display, and voice-based generative AI.
LG demonstrated the robot folding laundry and handling household items slowly and deliberately. That pace mattered. This wasn’t a flashy stunt—it felt like a signal that LG is prioritizing reliability and long-term deployment, training models on real household data instead of chasing applause.
A line is forming between companies building demos and companies building machines meant to live in homes.
A Robot Vacuum… With Legs
One of the most unexpected moments came from Roborock.
The company revealed a prototype vacuum equipped with hinged, wheeled legs capable of climbing stairs, balancing itself, and cleaning steps while moving. In the demo, the robot took about 30–40 seconds to climb five large steps—but it cleaned them as it went, something previous stair-capable designs couldn’t do.
There’s no pricing or release date yet, but expectations are already forming, especially after Roborock’s Saros Z70 robotic-arm vacuum launched at $2,599. This felt less like a gimmick and more like a glimpse at how home automation keeps expanding one practical problem at a time.
Companion Robots Are Going Mainstream
Emotional and companion-style robots were no longer fringe curiosities.
SwitchBot showcased its Kata Friends robot pets—small, wheeled companions that recognize faces, respond to gestures, and interpret basic emotional cues. Launching in Japan at around ¥10,000 (roughly $64), the price point matters. This isn’t a luxury experiment—it’s an early signal that robotic companionship is heading toward mass-market territory.
The same push toward the mainstream appeared across the growing category of desk-based AI companions.
Razer introduced a physical version of Project Ava, its gaming co-pilot, embodied as a cylindrical desktop device with an animated character named Kira that watches gameplay and offers real-time advice.
Lepro's Amy took a more intimate approach: an AI "soulmate" device with a curved OLED display designed to simulate emotional connection. These weren't framed as jokes or novelties—they were positioned as legitimate product categories.
Flexible Screens and Foldables Step Forward
The hardware story wasn't limited to robotics.
Lenovo and Motorola made a major statement—literally inside the Las Vegas Sphere.
Lenovo showcased rollable laptop concepts that expand from standard displays into ultrawide screens. The ThinkPad Roll XD even wraps its OLED panel around the lid to create a second outward-facing display. These are still concepts, but the engineering feels far enough along to suggest Lenovo is testing real use cases, not just spectacle.
Motorola debuted its Razr Fold, stepping into book-style foldables with a 6.6-inch external screen, an 8.1-inch internal display, and a 50-megapixel camera system. Pricing and full specs are coming later this year, but the form factor alone puts Motorola back into serious competition with Samsung and Google.
AI Becomes Sports Infrastructure
One of the quieter but more telling announcements came from Lenovo and FIFA.
The two announced AI-powered 3D digital avatars for refereeing and broadcast use during the 2026 FIFA World Cup. Using generative AI and detailed player models, Lenovo is creating accurate digital replicas for off-site replays and officiating visuals.
It wasn’t flashy—but it showed how AI visualization is becoming infrastructure, not entertainment.
Chips Set the Pace
On the silicon side, Intel officially launched Core Ultra Series 3 (Panther Lake), built on its new 18A process. Intel claims up to 27 hours of laptop battery life, alongside an integrated GPU delivering near-discrete performance.
Preorders opened immediately, with shipping scheduled for January 27. Intel also confirmed plans for a Panther Lake-based handheld gaming platform, including a dedicated Core G3 processor—clearly aiming at a market AMD has dominated.
AMD, meanwhile, leaned hard into AI scale. CEO Lisa Su predicted the number of AI users will grow from 1 billion to 5 billion within five years. Ryzen AI 400 processors and server chips were positioned as foundational layers across devices and data centers.
NVIDIA took the sharpest stance of all. There were no new consumer GPUs. Jensen Huang focused instead on AI infrastructure, robotics, autonomous vehicles, and data centers, repeatedly emphasizing “physical AI” as the company’s future.
AI Meets Energy, Kitchens, and Daily Life
CES 2026 also connected AI to long-term energy bets. Commonwealth Fusion Systems announced work with NVIDIA and Siemens to build a digital twin of its SPARC fusion reactor—using AI to design and operate systems that don’t yet exist commercially.
In the home, Nymble introduced an AI-powered robot chef that cooks single-pot meals from over 500 recipes, adjusting for dietary preferences. Launching on Kickstarter in February, it felt less like a gadget and more like incremental kitchen automation.
Elsewhere, ultra-low-power e-ink art frames, kid-friendly communication devices, color-changing press-on nails, and gaming hardware updates all pointed in the same direction: AI embedded quietly, not loudly.
The Pattern Was Impossible to Miss
By the end of day one, the message of CES 2026 was clear.
AI is no longer abstract. Robots aren’t theoretical. Homes, factories, kitchens, vehicles, energy systems, and even sports officiating are already being wired together.
CES 2026 didn't ease into the future. It opened with that future already moving on the show floor.