This is a valid RSS feed.
This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.
Recommendations were flagged at the following locations in the feed source: line 2, column 0 (445, 230, and 216 occurrences); line 5, column 0 (32, 14, 14, and 8 occurrences); line 6, column 21382 (12 occurrences); and line 6, column 0 (49 occurrences).
<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/topic/robotics.rss" rel="self"></atom:link><language>en-us</language><lastBuildDate>Wed, 17 Sep 2025 20:05:55 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTc2MzA3MTQzOX0.SxRBIud_XE2YWQFaIJD9BPB1w-3JsFhiRkJIIe9Yq-g/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><item><title>Video Friday: A Soft Robot Companion</title><link>https://spectrum.ieee.org/video-friday-soft-robot-companion</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-person-gently-touches-a-humanoid-robot-s-head-in-a-brightly-lit-room-with-colorful-posters.png?id=61594149&width=1245&height=700&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="bkzxjzlekru"><em>Fourier’s first Care-bot GR-3. This full-size “care bot” is designed as an interactive companion. Its soft-touch outer shell and multimodal emotional interaction system bring the concept of “warm tech companionship” to life.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="768d0fc535e35c9f7aac70048df886c2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/bkzXjzlEKrU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I like that it’s <a href="https://spectrum.ieee.org/blossom-a-creative-handmade-approach-to-social-robotics-from-cornell-and-google" target="_blank">soft to the touch</a>, although I’m not sure that encouraging touch is safe. Reminds me a little bit of <a href="https://spectrum.ieee.org/nasa-jsc-unveils-valkyrie-drc-robot" target="_blank">Valkyrie</a>, where NASA put a lot of thought into the soft aspects of the robot.</p><p>[ <a href="https://www.fftai.com/products-gr3">Fourier</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="mweqy6dfzjm">TAKE MY MONEY</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a3250b7c6ca10e8036e064391036c260" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/mwEqY6DFzjM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>This 112-gram micro air vehicle (MAV) features foldable propeller arms that can lock into a compact rectangular profile comparable to the size of a smartphone. The vehicle can be launched by simply throwing it in the air, at which point the arms will unfold and autonomously stabilize to a hovering state. 
Multiple flight tests demonstrated the capability of the feedback controller to stabilize the MAV based on different initial conditions, including tumbling rates of up to 2,500 degrees per second.</em></blockquote><p>[ <a href="https://avfl.engr.tamu.edu/">AVFL</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="aluhg2wm2ca"><em>The U.S. Naval Research Laboratory (NRL), in collaboration with NASA, is advancing space robotics by deploying reinforcement-learning algorithms onto <a href="https://robotsguide.com/robots/astrobee" target="_blank">Astrobee</a>, a free-flying robotic assistant on board the International Space Station. This video highlights how NRL researchers are leveraging artificial intelligence to enable robots to learn, adapt, and perform tasks autonomously. By integrating reinforcement learning, Astrobee can improve maneuverability and optimize energy use.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="35143f03ef586b1486f8a7fd72880ed0" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ALUhG2Wm2CA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.nrl.navy.mil/Media/News/Article/4297593/reinforcement-learning-is-making-a-buzz-in-space/">NRL</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="sbusoape43k">Every day I’m scuttlin.’</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d84bff9db01e5eff19f4e10994e34396" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/sbUSOAPe43k?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://groundcontrolrobotics.com/">Ground Control Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="uxhdhj1adr4"><em>Trust is built. Every part of our robot Proxie—from wheels to eyes—is designed with trust in mind. Cobot CEO Brad Porter explains the intent behind its design.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="550db0bcbd986d55bcb2e3d62764e5d4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/uXhDHj1aDr4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.co.bot/our-cobot">Cobot</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="-wp-kg7lcwk">Phase 1: Build lots of small quadruped robots. Phase 2: ? 
Phase 3: Profit!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="8e80c7c0d0b8fde292d19064bffcc03d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/-WP-Kg7LcWk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.deeprobotics.cn/en">DEEP Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="1zttbdsslu4"><em>LAPP USA partnered with Corvus Robotics to solve a long-standing supply-chain challenge: labor-intensive, error-prone inventory counting.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ac92ed142b260f17a19f35eef9823afc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1ZTtBDsslu4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.corvus-robotics.com/">Corvus</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="tn9zfda4sse">I’m pretty sure that 95 percent of all science consists of moving small amounts of liquid from one container to another.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0471e4f8ecff1ba048b044969d115224" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/tN9zFDa4ssE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.flexiv.com/">Flexiv</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="c2vvjzks6-q"><a href="https://spectrum.ieee.org/tag/raffaello-d-andrea" target="_blank">Raffaello D’Andrea</a>, interviewed at ICRA 2025.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="02705a05e54b1ba3a57f9d0dff0961c4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/c2VVJZkS6-Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.verity.net/">Verity</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="qyc11yl5bxc">Tessa Lau, interviewed at ICRA 2025.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0bb680f058771328a6021c53d2e5fd4b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/qyC11yl5bXc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dustyrobotics.com/">Dusty Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="6zs05ybplty"><em>Ever wanted to look inside the mind behind a cutting-edge humanoid robot? In this special episode, we have Dr. 
Aaron Zhang, the product manager at LimX Dynamics, for an exclusive deep dive into the LimX Oli.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5f0bb1f8cbb3fd8d7492527f64d577d1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/6zs05YBPlTY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 12 Sep 2025 17:04:10 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-soft-robot-companion</guid><category>Video friday</category><category>Robotics</category><category>Humanoid robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-person-gently-touches-a-humanoid-robot-s-head-in-a-brightly-lit-room-with-colorful-posters.png?id=61594149&width=980"></media:content></item><item><title>Reality Is Ruining the Humanoid Robot Hype</title><link>https://spectrum.ieee.org/humanoid-robot-scaling</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robots-with-shovels-stand-in-a-container-ready-for-deployment-on-a-pallet.jpg?id=61564898&width=1245&height=700&coordinates=100%2C0%2C100%2C0"/><br/><br/><p><strong>Over the next several</strong> years, humanoid robots will change the nature of work. Or at least, that’s what humanoid robotics companies have been consistently promising, enabling them to raise <a href="https://www.linkedin.com/posts/andrakeay_agility-robotics-a-humanoid-robot-maker-activity-7312580510977769472-mifD/" rel="noopener noreferrer" target="_blank">hundreds</a> of <a href="https://apptronik.com/news-collection/apptronik-raises-350-million-in-series-a-funding" rel="noopener noreferrer" target="_blank">millions</a> of <a href="https://finance.yahoo.com/news/figure-ai-shakes-silicon-valley-140122922.html" rel="noopener noreferrer" target="_blank">dollars</a> at valuations that run into the billions.</p><p>Delivering on these promises will require a lot of robots. Agility Robotics expects to ship “<a href="https://www.bloomberg.com/news/videos/2025-03-04/2025-the-year-for-humanoid-robots-agility-robotics-ceo-video" rel="noopener noreferrer" target="_blank">hundreds</a>” of its Digit robots in 2025 and has a factory in Oregon capable of building <a href="https://spectrum.ieee.org/agility-humanoid-robotics-factory" target="_self">over 10,000</a> robots per year. Tesla <a href="https://www.wsj.com/business/autos/musk-tells-tesla-workers-dont-sell-your-shares-98691278" rel="noopener noreferrer" target="_blank">is planning </a>to produce 5,000 of its Optimus robots in 2025, and at least 50,000 in 2026. Figure believes “<a href="https://www.forbes.com/sites/johnkoetsier/2025/01/30/figure-plans-to-ship-100000-humanoid-robots-over-next-4-years/" rel="noopener noreferrer" target="_blank">there is a path to 100,000 robots</a>” by 2029. And these are just three of the largest companies in an increasingly crowded space.</p><div class="ieee-sidebar-small"><p>This article is part of our special report <a href="https://spectrum.ieee.org/special-reports/scale/" target="_blank">The Scale Issue</a>.</p></div><p>Amplifying this message are many financial analysts: <a href="https://institute.bankofamerica.com/content/dam/transformation/humanoid-robots.pdf" rel="noopener noreferrer" target="_blank">Bank of America Global Research</a>, for example, predicts that global humanoid robot shipments will reach 18,000 units in 2025. And <a href="https://www.morganstanley.com/insights/articles/humanoid-robot-market-5-trillion-by-2050" rel="noopener noreferrer" target="_blank">Morgan Stanley Research estimates</a> that by 2050 there could be over 1 billion humanoid robots, part of a US $5 trillion market.</p><p>But as of now, the market for humanoid robots is almost entirely hypothetical. Even the most successful companies in this space have deployed only a small handful of robots in carefully controlled pilot projects. And future projections seem to be based on an extraordinarily broad interpretation of jobs that a capable, efficient, and safe humanoid robot—which does not currently exist—might conceivably be able to do. Can the current reality connect with the promised scale?</p><h2>What Will It Take to Scale Humanoid Robots?</h2><p>Physically building tens of thousands, or even hundreds of thousands, of humanoid robots is certainly possible in the near term. 
In 2023, <a href="https://ifr.org/ifr-press-releases/news/record-of-4-million-robots-working-in-factories-worldwide" rel="noopener noreferrer" target="_blank">on the order of 500,000 industrial robots were installed worldwide</a>. Under the basic assumption that a humanoid robot is approximately equivalent to four industrial arms in terms of components, existing supply chains should be able to support even the most optimistic near-term projections for humanoid manufacturing.</p><p>But simply building the robots is arguably the easiest part of scaling humanoids, says <a href="https://www.linkedin.com/in/meloneewise/" rel="noopener noreferrer" target="_blank">Melonee Wise</a>, who served as chief product officer at Agility Robotics until this month. “The bigger problem is demand—I don’t think anyone has found an application for humanoids that would require several thousand robots per facility.” Large deployments, Wise explains, are the most realistic way for a robotics company to scale its business, since onboarding any new client can take weeks or months. An alternative approach to deploying several thousand robots to do a single job is to deploy several hundred robots that can each do 10 jobs, which seems to be what most of the humanoid industry is betting on in the medium to long term.</p><p>While there’s a belief across much of the humanoid robotics industry that rapid progress in AI must somehow translate into rapid progress toward multipurpose robots, it’s <a href="https://spectrum.ieee.org/solve-robotics" target="_self">not</a> <a href="https://www.youtube.com/watch?v=PfvctjoMPk8" rel="noopener noreferrer" target="_blank">clear</a> how, when, or if that will happen. “I think what a lot of people are hoping for is they’re going to AI their way out of this,” says Wise. “But the reality of the situation is that currently AI is not robust enough to meet the requirements of the market.”</p><h2>Bringing Humanoid Robots to Market</h2><p><span>Market requirements for humanoid robots include a slew of extremely dull, extremely critical things like battery life, reliability, and safety. Of these, battery life is the most straightforward—for a robot to usefully do a job, it can’t spend most of its time charging. The next version of Agility’s Digit robot, which can handle payloads of up to 16 kilograms, includes a bulky “backpack” containing a battery with a charging ratio of 10 to 1: The robot can run for 90 minutes, and fully recharge in 9 minutes. Slimmer humanoid robots from other companies must necessarily be making compromises to maintain their svelte form factors.</span></p><p>In operation, Digit will probably spend a few minutes charging after running for 30 minutes. That’s because 60 minutes of Digit’s runtime is essentially a reserve in case something happens in its workspace that requires it to temporarily pause, a not-infrequent occurrence in the logistics and manufacturing environments that Agility is targeting. Without a 60-minute reserve, the robot would be much more likely to run out of power mid-task and need to be manually recharged. Consider what that might look like with even a modest deployment of several hundred robots weighing over a hundred kilograms each. “No one wants to deal with that,” comments Wise.</p><p>Potential customers for humanoid robots are very concerned with downtime. Over the course of a month, a factory operating at 99 percent reliability will see approximately 5 hours of downtime. 
Wise says that any downtime that stops something like a production line can cost tens of thousands of dollars per minute, which is why many industrial customers expect a couple more 9s of reliability: 99.99 percent. Wise says that Agility has demonstrated this level of reliability in some specific applications, but not in the context of multipurpose or general-purpose functionality.</p><h2>Humanoid Robot Safety</h2><p><span>A humanoid robot in an industrial environment must meet general </span><a href="https://blog.ansi.org/ansi/ansi-b11-standards-safety-of-machinery/" target="_blank">safety</a><span> </span><a href="https://osha.europa.eu/en/legislation/directives/directive-2006-42-ec-of-the-european-parliament-and-of-the-council" target="_blank">requirements</a><span> for industrial machines. In the past, robotic systems like autonomous vehicles and drones have benefited from immature regulatory environments to scale quickly. But Wise says that approach can’t work for humanoids, because the industry is already heavily regulated—the robot is simply considered another piece of machinery.</span></p><p>There are also more specific<a href="https://webstore.ansi.org/standards/ria/ansiriar15082020" target="_blank"> safety standards</a> currently under development for humanoid robots, explains Matt Powers, associate director of autonomy R&D at Boston Dynamics. He notes that his company is helping develop an <a href="https://www.iso.org/standard/91469.html" target="_blank">International Organization for Standardization (ISO) safety standard for dynamically balancing legged robots</a>. “We’re very happy that the top players in the field, like Agility and Figure, are joining us in developing a way to explain why we believe that the systems that we’re deploying are safe,” Powers says.</p><p>These standards are necessary because the traditional safety approach of cutting power may not be a good option for a dynamically balancing system. Doing so will cause a humanoid robot to fall over, potentially making the situation even worse. There is no simple solution to this problem, and the initial approach that Boston Dynamics expects to take with its Atlas robot is to keep the robot out of situations where simply powering it off might not be the best option. “We’re going to start with relatively low-risk deployments, and then expand as we build confidence in our safety systems,” Powers says. “I think a methodical approach is really going to be the winner here.”</p><p>In practice, low risk means keeping humanoid robots away from people. But humanoids that are restricted by what jobs they can safely do and where they can safely move are going to have more trouble finding tasks that provide value.</p><h2>Are Humanoids the Answer?</h2><p><span>The issues of demand, battery life, reliability, and safety all need to be solved before humanoid robots can scale. But a more fundamental question to ask is whether a bipedal robot is actually worth the trouble.</span></p><p>Dynamic balancing with legs would theoretically enable these robots to navigate complex environments like a human. Yet demo videos show these humanoid robots as either mostly stationary or repetitively moving short distances over flat floors. The promise is that what we’re seeing now is just the first step toward humanlike mobility. 
But in the short to medium term, there are much more reliable, efficient, and cost-effective platforms that can take over in these situations: robots with arms, but with wheels instead of legs.</p><p>Safe and reliable humanoid robots have the potential to revolutionize the labor market at some point in the future. But potential is just that, and despite the humanoid enthusiasm, we have to be realistic about what it will take to turn potential into reality. <span class="ieee-end-mark"></span></p><p><em>This article appears in the October 2025 print issue as “Why Humanoid Robots Aren’t Scaling.”</em></p>]]></description><pubDate>Thu, 11 Sep 2025 13:00:00 +0000</pubDate><guid>https://spectrum.ieee.org/humanoid-robot-scaling</guid><category>Humanoid robots</category><category>Scale issue</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/robots-with-shovels-stand-in-a-container-ready-for-deployment-on-a-pallet.jpg?id=61564898&width=980"></media:content></item><item><title>How Robotics Is Powering the Future of Innovation</title><link>https://content.knowledgehub.wiley.com/from-concept-to-reality-how-robotics-is-transforming-our-world/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/heilind-molex-logo-bold-blue-and-red-text-with-a-yellow-line-under-heilind.png?id=61582070&width=980"/><br/><br/><p>The future of robotics is being shaped by powerful technologies like AI, edge computing, and high-speed connectivity, driving smarter, more responsive machines across industries. Robots are no longer confined to static environments—they are evolving to interact dynamically with humans and their surroundings.</p><p>This eBook explores the impact of robotics in diverse fields, from home automation and medical technology to automotive, data centers, and industrial applications. It highlights challenges like power efficiency, miniaturization, and ruggedization, while showcasing Molex’s innovative solutions tailored for each domain.</p><p>Additionally, the eBook covers:</p><ul><li>Ruggedized connectors for harsh industrial settings</li><li>Advanced power management for home robots</li><li>Miniaturized systems for precision medical robotics</li><li>5G/6G-enabled autonomous vehicles</li><li>High-speed data solutions for cloud infrastructure</li></ul><div><span><a href="https://content.knowledgehub.wiley.com/from-concept-to-reality-how-robotics-is-transforming-our-world/" target="_blank">Download this free whitepaper now!</a></span></div>]]></description><pubDate>Thu, 11 Sep 2025 10:00:02 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/from-concept-to-reality-how-robotics-is-transforming-our-world/</guid><category>Type:whitepaper</category><category>Innovation</category><category>Robotics</category><category>Edge computing</category><category>Artificial intelligence</category><dc:creator>Heilind Electronics</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/61582070/origin.png"></media:content></item><item><title>Large Behavior Models Are Helping Atlas Get to Work</title><link>https://spectrum.ieee.org/boston-dynamics-atlas-scott-kuindersma</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/boston-dynamics-robot-operating-autonomously-near-stacked-robotic-components-on-a-shelf.png?id=61540463&width=1245&height=700&coordinates=0%2C122%2C0%2C122"/><br/><br/><p><a href="https://spectrum.ieee.org/tag/boston-dynamics" target="_blank">Boston Dynamics</a> can be forgiven, I think, for the relative lack of acrobatic prowess displayed by the <a href="https://spectrum.ieee.org/atlas-humanoid-robot" target="_blank">new version of Atlas</a> in (<a href="https://www.youtube.com/watch?v=I44_zbEwz_w" rel="noopener noreferrer" target="_blank">most of</a>) its latest videos. In fact, if you look at <a href="https://www.youtube.com/watch?v=F_7IPm7f1vI" rel="noopener noreferrer" target="_blank">this Atlas video</a> from late last year, and compare it to <a href="https://www.youtube.com/watch?v=HYwekersccY" rel="noopener noreferrer" target="_blank">Atlas’ most recent video</a>, it’s doing what looks to be more or less the same logistics-y stuff—all of which is far less visually exciting than backflips. </p><p>But I would argue that the relatively dull tasks Atlas is working on now, moving car parts and totes and whatnot, are just as impressive. Making a humanoid that can consistently and economically and safely do useful things over the long term could very well be the hardest problem in robotics right now, and Boston Dynamics is taking it seriously. </p><p>Last October, <a href="https://spectrum.ieee.org/boston-dynamics-toyota-research" target="_self">Boston Dynamics announced a partnership with Toyota Research Institute</a> with the goal of general-purpose-izing Atlas. We’re now starting to see the results of that partnership, and Boston Dynamics’ vice president of robotics research, <a href="https://www.linkedin.com/in/scott-kuindersma-06a38152/" target="_blank">Scott Kuindersma</a>, takes us through the progress they’ve made.</p><h2>Building AI Generalist Robots</h2><p>While the context of this work is “building AI generalist robots,” I’m not sure that anyone really knows what a “generalist robot” would actually look like, or even how we’ll even know when someone has achieved it. Humans are generalists, sort of—we can potentially do a lot of things, and we’re fairly adaptable and flexible in many situations, but we still require training for most tasks. I bring this up just to try and contextualize expectations, because I think a successful humanoid robot doesn’t have to actually be a generalist, but instead just has to be capable of doing several different kinds of tasks, and to be adaptable and flexible in the context of those tasks. And that’s already difficult enough.</p><p>The approach that the two companies are taking is to leverage large behavior models (LBMs), which combine more general world knowledge with specific task knowledge to help Atlas with that adaptability and flexibility thing. As Boston Dynamics points out in <a href="https://bostondynamics.com/blog/large-behavior-models-atlas-find-new-footing/" rel="noopener noreferrer" target="_blank">a recent blog post</a>, “the field is steadily accumulating evidence that policies trained on a large corpus of diverse task data can generalize and recover better than specialist policies that are trained to solve one or a small number of tasks.” Essentially, the goal is to develop a foundational policy that covers things like movement and manipulation, and then add more specific training (provided by humans) on top of that for specific tasks. 
This video below shows how that’s going so far.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="c2bec65dbee6d5bbe319fb259c24c33d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/HYwekersccY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">- YouTube</small> </p><p><span>What the video doesn’t show is the training system that Boston Dynamics uses to teach Atlas to do these tasks. Essentially imitation learning, an operator wearing a motion tracking system teleoperates Atlas through motion and manipulation tasks. There’s a one-to-one mapping between the operator and the robot, making it fairly intuitive, although as anyone who has tried to teleoperate a robot with a surfeit of degrees of freedom can attest to, it takes some practice to do it well. </span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Robot and VR user interact in a lab workspace." class="rm-shortcode" data-rm-shortcode-id="fd18a827a5ae55c2f956718f34558ce6" data-rm-shortcode-name="rebelmouse-image" id="d9f54" loading="lazy" src="https://spectrum.ieee.org/media-library/robot-and-vr-user-interact-in-a-lab-workspace.png?id=61540461&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A motion tracking system provides high-quality task training data for Atlas.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Boston Dynamics</small></p><p><span>This interface provides very high-quality demonstration data for Atlas, but it’s not the easiest to scale—just one of the challenges of deploying a </span><span>multipurpose (different than generalist!) humanoid.</span></p><p>For more about what’s going on behind the scenes in this video and Boston Dynamics’ strategy with Atlas, <em>IEEE Spectrum</em> spoke with Kuindersma.</p><p class="rm-anchors" id="top">Scott Kuindersma on:</p><ul><li><a href="#new">What’s new from Boston Dynamics and Toyota Research Institute</a></li><li><a href="#lbm">The role of large behavior models</a></li><li><a href="#unique">Learning through human imitation</a></li><li><a href="#limit">The potential limitations of imitating humans</a></li><li><a href="#data">The importance of high-quality data</a></li><li><a href="#next">The future for Atlas</a></li></ul><p class="rm-anchors" id="new"><strong>In <a href="https://www.youtube.com/watch?v=F_7IPm7f1vI" target="_blank">a video from last October</a> just as your partnership with Toyota Research Institute was beginning, Atlas was shown moving parts around and performing whole-body manipulation. What’s the key difference between that demonstration and what we’re seeing in the new video? </strong></p><p><strong>Scott Kuindersma: </strong>The big difference is how we programmed the behavior. The previous system was a more traditional robotics stack involving a combination of model-based controllers, planners, and machine learning models for perception all architected together to do end-to-end manipulation. Programming a new task on that system generally required roboticists or system integrators to touch code and tell the robot what to do. </p><p>For this new video, we replaced most of that system with a single neural network that was trained on demonstration data. 
This is much more flexible because there’s no task-specific programming or other open-ended creative engineering required. Basically, if you can teleoperate the robot to do a task, you can train the network to reproduce that behavior. This approach is more flexible and scalable because it allows people without advanced degrees in robotics to “program” the robot.</p><p><a href="#top">Back to top</a></p><p class="rm-anchors" id="lbm"><strong>We’re talking about a large behavior model (LBM) here, right? What would you call the kind of learning that this model does?</strong></p><p><strong>Kuindersma: </strong>It is a kind of imitation learning. We collect many teleoperation demonstrations and train a neural network to reproduce the input-output behaviors in the data. The inputs are things like raw robot camera images, natural language descriptions of the task, and proprioception, and the outputs are the same teleop commands sent by the human interface.</p><p>What makes it a large behavior model is that we collect data from many different tasks and, in some cases, many different robot embodiments, using all of that as training data for the robot to end up with a single policy that knows how to do many things. The idea is that by training the network on a much wider variety of data and tasks and robots, its ability to generalize will be better. As a field, we are still in the early days of gathering evidence that this is actually the case (our [Toyota Research Institute] collaborators are <a href="https://toyotaresearchinstitute.github.io/lbm1/" target="_blank">among those leading the charge</a>), but we expect it is true based on the empirical trends we see in robotics and other AI domains.</p><p><strong>So the idea with the behavior model is that it will be more generalizable, more adaptable, or require less training because it will have a baseline understanding of how things work?</strong></p><p><strong>Kuindersma: </strong>Exactly, that’s the idea. At a certain scale, once the model has seen enough through its training data, it should have some ability to take what it’s learned from one set of tasks and apply those learnings to new tasks. One of the things that makes these models flexible is that they are conditioned on language. We collect teleop demonstrations and then post-annotate that data with language, having humans or language models describing in English what is happening. The network then learns to associate these language prompts with the robot’s behaviors. Then, you can tell the model what to do in English, and it has a chance of actually doing it. At a certain scale, we hope it won’t take hundreds of demonstrations for the robot to do a task; maybe only a couple, and maybe way in the future, you might be able to just tell the robot what to do in English, and it will know how to do it, even if the task requires dexterity beyond simple object pick-and-place.</p><p><a href="#top">Back to top</a></p><p class="rm-anchors" id="unique"><strong>There are a lot of robot videos out there of robots doing stuff that might look similar to what we’re seeing here. Can you tell me how what Boston Dynamics and Toyota Research Institute are doing is unique?</strong></p><p><strong>Kuindersma: </strong>Many groups are using AI tools for robot demos, but there are some differences in our strategic approach. From our perspective, it’s crucial for the robot to perform the full breadth of humanoid manipulation tasks. 
That means, if you use a data-driven approach, you need to somehow funnel those embodied experiences into the dataset you’re using to train the model. We spent a lot of time building a highly expressive teleop interface for Atlas, which allows operators to move the robot around quickly, take steps, balance on one foot, reach the floor and high shelves, throw and catch things, and so on.</p><p>The ability to directly mirror a human body in real time is vital for Atlas to act like a real humanoid laborer. If you’re just standing in front of a table and moving things around, sure, you can do that with a humanoid, but you can do it with much cheaper and simpler robots, too. If you instead want to, say, bend down and pick up something from between your legs, you have to make careful adjustments to the entire body while doing manipulation. The tasks we’ve been focused on with Atlas over the last couple months have been focused more on collecting this type of data, and we’re committed to making these AI models extremely performant so the motions are smooth, fast, beautiful, and fully cover what humanoids can do.</p><p><strong>Is it a constraint that you’re using imitation learning, given that Atlas is built to move in ways that humans can’t? How do you expand the operating envelope with this kind of training? </strong></p><p><strong>Kuindersma: </strong>That’s a great question. There are a few ways to think about it:</p><ul><li>Atlas can certainly do things like continuous joint rotation that people can’t. While those capabilities might offer efficiency benefits, I would argue that if Atlas <em>only</em> behaved exactly like a competent human, that would be amazing, and we would be very happy with that.</li><li>We could extend our teleop interface to make available types of motions the robot can do but a person can’t. The downside is this would probably make teleoperation less intuitive, requiring a more highly trained expert, which reduces scalability.</li><li>We may be able to co-train our large behavior models with data sources that are not just teleoperation-based. For example, in simulation, you could use rollouts from reinforcement learning policies or programmatic planners as augmented demonstrations that include these high-range-of-motion capabilities. The LBM can then learn to leverage that in conjunction with teleop demonstrations. This is not just a hypothetical, we’ve actually found that co-training with simulation data has improved performance in the real robot, which is quite promising.</li></ul><p><strong>Can you tell me what Atlas was directed to do in the video? Is it primarily trying to mirror its human-based training, or does it have some capacity to make decisions?</strong> </p><p><strong>Kuindersma:</strong> In this case, Atlas is responding primarily to visual and language cues to perform the task. At our current scale and with the model’s training, there’s a limited ability to completely innovate behaviors. However, you can see a lot of variety and responsiveness in the details of the motion, such as where specific parts are in the bin or where the bin itself is. 
As long as those experiences are reflected somewhere in the training data, the robot uses its real-time sensor observations to produce the right type of response.</p><p><strong>So, if the bin was too far away for the robot to reach, without specific training, would it move itself to the bin?</strong> </p><p><strong>Kuindersma: </strong>We haven’t done that experiment, but if the bin was too far away, I think it might take a step forward because we varied the initial conditions of the bin when we collected data, which sometimes required the operator to walk the robot to the bin. So there is a good chance that it would step forward, but there is also a small chance that it might try to reach and not succeed. It can be hard to make confident predictions about model behavior without running experiments, which is one of the fun features of working with models like this.</p><p><a href="#top">Back to top</a></p><p class="rm-anchors" id="limit"><strong>It’s interesting how a large behavior model, which provides world knowledge and flexibility, interacts with this instance of imitation learning, where the robot tries to mimic specific human actions. How much flexibility can the system take on when it’s operating based on human imitation?</strong></p><p><strong>Kuindersma:</strong> It’s primarily a question of scale. A large behavior model is essentially imitation learning at scale, similar to a large language model. The hypothesis with large behavior models is that as they scale, generalization capabilities improve, allowing them to handle more real-world corner cases and require less training data for new tasks. Currently, the generalization of these models is limited, but we’re addressing that by gathering more data not only through teleoperating robots but also by exploring other scaling bets like non-teleop human demonstrations and sim/synthetic data. These other sources might have more of an “embodiment gap” to the robot, but the model’s ability to assimilate and translate between data sources could lead to better generalization.</p><p><strong>How much skill or experience does it take to effectively train Atlas through teleoperation?</strong> </p><p><strong>Kuindersma: </strong>We’ve had people on day tours jump in and do some teleop, moving the robot and picking things up. This ease of entry is thanks to our teams building a really nice interface: The user wears a VR headset, where they’re looking at a re-projection of the robot’s stereo RGB cameras, which are aligned to provide a 3D sense of vision, and there are built-in visual augmentations like desired hand locations and what the robot is actually doing to give people situational awareness.</p><p>So while novice users can do things fairly easily, they’re probably not generating the highest quality motions for training policies. To generate high-quality data, and to do that consistently over a period of several hours, it typically takes a couple of weeks of onboarding. We usually start with manipulation tasks and then progress to tasks involving repositioning the entire robot. It’s not trivial, but it’s doable. The people doing it now are not roboticists; we have a team of ‘robot teachers’ who are hired for this, and they’re awesome. It gives us a lot of hope for scaling up the operation as we build more robots.</p><p><a href="#top">Back to top</a></p><p class="rm-anchors" id="data"><strong>How is what you’re doing different from other companies that might lean much harder on scaling through simulation? 
Are you focusing more on how humans do things?</strong></p><p><strong>Kuindersma: </strong>Many groups are doing similar things, with differences in technical approach, platform, and data strategy. You can characterize the strategies people are taking by thinking about a “data pyramid,” where the top of the pyramid is the highest quality, hardest-to-get data, which is typically teleoperation on the robot you’re working with. The middle of the pyramid might be egocentric data collected on people (e.g., by wearing sensorized gloves), simulation data, or other synthetic world models. And the bottom of the pyramid is data from YouTube or the rest of the Internet. </p><p>Different groups allocate finite resources to different distributions of these data sources. For us, we believe it’s really important to have as large a baseline of actual on-robot data (at the top of the pyramid) as possible. Simulation and synthetic data are almost certainly part of the puzzle, and we’re investing resources there, but we’re taking a somewhat balanced data strategy rather than throwing all of our eggs in one basket.</p><p><strong>Ideally you want the top of the pyramid to be as big as possible, right? </strong></p><p><strong>Kuindersma:</strong> Ideally, yes. But you won’t get to the scale you need by just doing that. You need the whole pyramid, but having as much high-quality data at the top as possible only helps.</p><p><strong>But it’s not like you can just have a super large bottom to the pyramid and not need the top?</strong></p><p><strong>Kuindersma: </strong>I don’t think so. I believe there needs to be enough high-quality data for these models to effectively translate into the specific embodiment that they are executing on. There needs to be enough of that “top” data for the translation to happen, but no one knows the exact distribution, like whether you need 5 percent real robot data and 95 percent simulation, or some other ratio.</p><p><a href="#top">Back to top</a></p><p class="rm-anchors" id="next"><strong>Is that a box of ‘<a href="https://punyo.tech/" target="_blank">Puny-os</a>’ on the shelf in the video?</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-right rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: right;"> <img alt="Robot handling a box beside a Boston Dynamics robot dog on a shelf." class="rm-shortcode" data-rm-shortcode-id="7de5bad987fe85e869dd762e07c2b7a9" data-rm-shortcode-name="rebelmouse-image" id="b4601" loading="lazy" src="https://spectrum.ieee.org/media-library/robot-handling-a-box-beside-a-boston-dynamics-robot-dog-on-a-shelf.png?id=61540466&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Part of this self-balancing robot.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Boston Dynamics</small></p><p><strong>Kuindersma: </strong>Yeah! Alex Alspach from [Toyota Research Institute] brought it in to put in the background as an easter egg. </p><p><strong>What’s next for Atlas?</strong></p><p><strong>Kuindersma: </strong>We’re really focused on maximizing the performance of manipulation behaviors. I think one of the things that we’re uniquely positioned to do well is reaching the full behavioral envelope of humanoids, including mobile bimanual manipulation, repetitive tasks, and strength, and getting the robot to move smoothly and dynamically using these models. 
We’re also developing repeatable processes to climb the robustness curve for these policies—we think reinforcement learning may play a key role in achieving this. </p><p>We’re also looking at other types of scaling bets around these systems. Yes, it’s going to be very important that we have a lot of high-quality on-robot, on-task data that we’re using as part of training these models. But we also think there are real opportunities in being able to leverage other data sources, whether that’s observing or instrumenting human workers or scaling up synthetic and simulation data, and understanding how those things can mix together to improve the performance of our models.</p><p><a href="#top">Back to top</a></p>]]></description><pubDate>Sun, 07 Sep 2025 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/boston-dynamics-atlas-scott-kuindersma</guid><category>Boston dynamics</category><category>Humanoid robots</category><category>Toyota research institute</category><category>Robotics</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/boston-dynamics-robot-operating-autonomously-near-stacked-robotic-components-on-a-shelf.png?id=61540463&width=980"></media:content></item><item><title>Video Friday: Robot Vacuum Climbs Stairs</title><link>https://spectrum.ieee.org/video-friday-eufy-robot-vacuum</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robotic-vacuum-from-eufy-on-a-dimly-lit-staircase-in-black-surroundings.png?id=61558584&width=1245&height=700&coordinates=0%2C8%2C0%2C8"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="http://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><p class="rm-anchors" id="pcunho0cy9q">This is ridiculous and I love it.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1d8dc2352eddb99aac790509c3e1140f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/PcunhO0cy9Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.eufy.com/">Eufy</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="6epzte9ctzy"><em>At ICRA 2024, We met Paul Nadan to learn about how his LORIS robot climbs up walls by sticking itself to rocks.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="864971ef74d0a648d4a29547ec606c26" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/6EPzTe9cTzY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://paulnadan.com/">CMU</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="8gfuuzdn4q8">If a <a data-linked-post="2666662286" href="https://spectrum.ieee.org/humanoid-robots" target="_blank">humanoid robot</a> is going to load my dishwasher, I expect it to do so optimally, not all haphazardly like a puny human.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a103243a17e5da4c2bd987c599c074dd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8gfuUzDn4Q8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.figure.ai/">Figure</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="tofpkw6d3ge"><em>Humanoid robots have recently achieved impressive progress in locomotion and whole-body control, yet they remain constrained in tasks that demand rapid interaction with dynamic environments through manipulation. 
Table tennis exemplifies such a challenge: with ball speeds exceeding 5 m/s, players must perceive, predict, and act within sub-second reaction times, requiring both agility and precision. To address this, we present a hierarchical framework for humanoid table tennis that integrates a model-based planner for ball trajectory prediction and racket target planning with a reinforcement learning–based whole-body controller.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="716557784fc438288eb1a94f2867ab6e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/tOfPKW6D3gE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://hybrid-robotics.berkeley.edu/">Hybrid Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8ebcsdnsgsg"><em>Despite their promise, today’s <a data-linked-post="2650274945" href="https://spectrum.ieee.org/eu-project-developing-symbiotic-robotplant-biohybrids" target="_blank">biohybrid robots</a> typically underperform their fully synthetic counterparts and their potential as predicted from a reductionist assessment of constituents. Many systems represent enticing proofs of concept with limited practical applicability. Most remain confined to controlled laboratory settings and lack feasibility in complex real-world environments. Developing biohybrid robots is currently a painstaking, bespoke process, and the resulting systems are routinely inadequately characterized. Complex, intertwined relationships between component, interface, and system performance are poorly understood, and methodologies to guide informed design of biohybrid systems are lacking. The HyBRIDS ARC opportunity seeks ideas to address the question: How can synthetic and biological components be integrated to enable biohybrid platforms that outperform traditional robotic systems?</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6c654c545d057ef0b659d7a5c0863c91" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8eBcSDnSgsg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.darpa.mil/research/programs/hybrids">DARPA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="vw7z5_sm7xa"><em>Robotic systems will play a key role in future lunar missions, and a great deal of research is currently being conducted in this area. One such project is SAMLER-KI (Semi-Autonomous Micro Rover for Lunar Exploration Using Artificial Intelligence), a collaboration between the German Research Center for Artificial Intelligence (DFKI) and the University of Applied Sciences Aachen (FH Aachen), Germany. The project focuses on the conceptual design of a semi-autonomous micro rover that is capable of surviving lunar nights while remaining within the size class of a micro rover. 
During development, conditions on the Moon such as dust exposure, radiation, and the vacuum of space are taken into account, along with the 14-Earth-day duration of a lunar night.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c0e7cd40ad1ccd5b5d48a4a47340c445" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/VW7Z5_sm7xA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotik.dfki-bremen.de/en/research/projects/samler-ki">DFKI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8qw5l_yovz8"><em>ARMstrong Dex is a human-scale dual-arm hydraulic robot developed by the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. It is capable of lifting its own body through vertical pull-ups and manipulating objects over 50 kg, demonstrating strength beyond human capabilities. In this test, ARMstrong Dex used a handheld saw to cut through a thick 40×90 mm wood beam. Sawing is a physically demanding task involving repetitive force application, fine trajectory control, and real-time coordination.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2e43050095b9b311b771a310bc8478a9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8QW5L_yOVZ8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.kaeri.re.kr/eng/">KAERI</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="ox9003uok9s">This robot stole my “OMG I HAVE JUICE” face.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="59e62bac81e6d2ba46238e522854c327" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/oX9003UOK9s?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.pudurobotics.com/en/products/flashbot-arm">Pudu Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="wjbxhdskx4g">The best way of dodging a punch to the face is to just have a big hole where your face should be.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d8c438cc6c93cc41484711af6dc141cc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/WjBXhdSkx4g?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I do wish they wouldn’t call it a combat robot, though.</p><p>[ <a href="https://www.unitree.com/">Unitree</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="p_tfzznhaqs">It really might be fun to have a DRC-style event for quadrupeds.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2eeb9989f52a0cc7da56df744bfaffd8" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" 
scrolling="no" src="https://www.youtube.com/embed/P_tfzznhAqs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.deeprobotics.cn/en">DEEP Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="zvnq0qirvp8"><em>CMU researchers are developing new technology to enable robots to physically interact with people who are not able to care for themselves. These breakthroughs are being deployed in the real world, making it possible for individuals with neurological diseases, stroke, multiple sclerosis, ALS and dementia to be able to eat, clean and get dressed fully on their own.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ef18f525ec90f73de852f4045d1fef52" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ZVNQ0qIrvP8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://rchi-lab.github.io/">CMU</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="acvscpemw4y"><em>Caracol’s additive manufacturing platforms use KUKA robotic arms to produce large-scale industrial parts with precision and flexibility. This video outlines how Caracol integrates multi-axis robotics, modular extruders, and proprietary software to support production in sectors like aerospace, marine, automotive, and architecture.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4255eb367f3abee372e7e4daeea9691a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AcvscpeMw4Y?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.kuka.com/it-it/settori/banca-dati-di-soluzioni/2025/09/caracol-case-study">KUKA</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="chyxgtjvn0i">There were a couple of robots at ICRA 2025, as you might expect.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ccc460cb8b8b8133b6e395a3954eb4fd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ChYXgTjVn0I?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://2026.ieee-icra.org/">ICRA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="mow0mvvlpc0"><em>On June 6, 1990, following the conclusion of Voyager’s planetary explorations, mission representatives held a news conference at NASA’s Jet Propulsion Laboratory in Southern California to summarize key findings and answer questions from the media. In the briefing, Voyager’s longtime project scientist Ed Stone, along with renowned science communicator Carl Sagan, also revealed the mission’s “Solar System Family Portrait,” a mosaic comprising images of six of the solar system’s eight planets. 
Carl Sagan was a member of the Voyager imaging team and instrumental in capturing these images and bringing them to the public.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ce23b7d63acaaa33f595b2d7eb8111f5" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/MoW0MVVLPc0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>Carl Sagan, man. Carl Sagan. The Pale Blue Dot unveil was right around 57:00, if you missed it.</p><p>[ <a href="https://www.jpl.nasa.gov/news/vintage-nasa-see-voyagers-1990-solar-system-family-portrait-debut/">JPL</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 05 Sep 2025 16:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-soft-robot-companion</guid><category>Video friday</category><category>Robotics</category><category>Robots</category><category>Vacuum robots</category><category>Humanoid robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robotic-vacuum-from-eufy-on-a-dimly-lit-staircase-in-black-surroundings.png?id=61558584&width=980"></media:content></item><item><title>Do People Really Want Humanoid Robots in Their Homes?</title><link>https://spectrum.ieee.org/home-humanoid-robots-survey</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/three-by-three-grid-showing-various-humanoid-robots-performing-household-tasks-such-as-watering-plants-vacuuming-and-loading-l.jpg?id=61534540&width=1245&height=700&coordinates=0%2C428%2C0%2C429"/><br/><br/><p>I’ve been teaching robotics at the University of Washington for more than a decade. Every class begins with “robotics news of the week.” For years, humanoid robots appeared only occasionally—usually in the form of viral clips of the Boston Dynamics Atlas doing parkour or RoboCup humanoid league bloopers that served more as comic relief than serious news.</p><p>But over the past few years, things have shifted. Each week brings another humanoid demo, each flashier than the last, as companies race to showcase new capabilities. And behind those slick videos lies a flood of venture capital. Humanoid robotics has become a billion-dollar frenzy.</p><p>The scale of investment is astonishing. Just a year ago, <a href="https://www.cnbc.com/2024/02/29/robot-startup-figure-valued-at-2point6-billion-by-bezos-amazon-nvidia.html" rel="noopener noreferrer" target="_blank">Figure AI’s $2.6 billion valuation</a> seemed extraordinary—until its latest funding round catapulted it to <a href="https://techfundingnews.com/figure-ai-to-grab-1-5b-funding-at-39-5b-valuation-eyes-to-produce-100000-robots-what-about-competition/" rel="noopener noreferrer" target="_blank">$39.5 billion</a>. Investors large and small are rushing in, and tech giants like Microsoft, Amazon, OpenAI, and NVIDIA are scrambling to get a foothold for fear of missing out. Tesla is <a href="https://fortune.com/2025/03/23/tesla-billion-gone-astray-questions-controls/" rel="noopener noreferrer" target="_blank">pouring resources into its Optimus robot</a>, while China has committed more than <a href="https://www.cnn.com/2025/03/25/tech/china-robots-market-competitiveness-intl-hnk/index.html" rel="noopener noreferrer" target="_blank">$10 billion in government funding</a> to drive down costs and seize market dominance. Goldman Sachs now projects the global humanoid market could reach <a href="https://www.goldmansachs.com/insights/articles/the-global-market-for-robots-could-reach-38-billion-by-2035" rel="noopener noreferrer" target="_blank">$38 billion by 2035</a>.</p><p>This surge of interest reflects a long-standing dream in robotics: if machines could match human form and function, they could simply step into human jobs without requiring us to change our environments. If humanoids could do everything people can, then in theory they could replace workers on the factory floor or in warehouse aisles. It’s no surprise, then, that many humanoid companies are targeting what they believe are sectors with labor shortages and undesirable jobs—<a href="https://bostondynamics.com/webinars/why-humanoids-are-the-future-of-manufacturing/" rel="noopener noreferrer" target="_blank">manufacturing</a>, <a href="https://www.agilityrobotics.com/industries/third-party-logistics" rel="noopener noreferrer" target="_blank">logistics</a>, <a href="https://www.agilityrobotics.com/industries/distribution" rel="noopener noreferrer" target="_blank">distribution</a>, <a href="https://apptronik.com/industries/retail" rel="noopener noreferrer" target="_blank">retail</a>—as near-term markets.</p><h2>Factories first, homes next?</h2><p><span></span><span>A subset of humanoid companies see homes as the next frontier. 
</span><a href="https://www.figure.ai/master-plan" target="_blank">Figure AI claims</a><span> humanoids will revolutionize “assisting individuals in the home” and “caring for the elderly.” Its marketing materials show robots handing an apple to a human, making coffee, putting away groceries and dishes, pouring drinks, and watering plants. </span><a href="https://www.youtube.com/live/6v6dbxPlsXs?t=1256s" target="_blank">Tesla’s Optimus</a><span> similarly branded as an “autonomous assistant, humanoid friend,” is shown folding clothes, cracking eggs, unloading groceries, receiving packages, and even playing family games. The </span><a href="https://www.1x.tech/neo" target="_blank">Neo humanoid by 1X Technologies</a><span> appears targeted solely at in-home use, with the company declaring that “1X bets on the home” and is “building a world where we do more of what we love, while our humanoid companions handle the rest.” Neo is depicted vacuuming, serving tea, wiping windows and tables, and carrying laundry and grocery bags.</span></p><p>All these glossy marketing videos struck a personal chord with me. I have always dreamed of robots in homes—and I know <a href="https://spectrum.ieee.org/when-will-we-have-robots-to-help-with-household-chores" target="_self">I am not alone.</a> Like many roboticists of my generation, my earliest memories of robots trace back to Rosie the Robot from <a href="https://en.wikipedia.org/wiki/List_of_The_Jetsons_characters" target="_blank">The Jetsons</a>. I dedicated my career to getting assistive robots into homes. In 2014, my students and I placed a <a href="https://robotsguide.com/robots/pr2" target="_blank">PR2 robot</a> in a home in Arizona, where it failed miserably at most tasks—though we learned a great deal in the process. Later, I was part of more successful in-home deployments of a <a href="https://spectrum.ieee.org/hello-robots-stretch-mobile-manipulator" target="_self">Stretch robot</a> and an <a href="https://ieeexplore.ieee.org/abstract/document/10974182" target="_blank">assistive feeding robot</a>. I even found myself enjoying housework because it gave me a chance to <a href="https://ieeexplore.ieee.org/document/6483517" target="_blank">analyze the tasks it entailed</a> with an eye toward someday automating them. For years, I promoted my work under a personal motto: “I want robots to do all the chores by the time I retire,” often joking that I might never retire. </p><p>Yet when billion-dollar companies began chasing the same dream, I found myself reacting with unease. I had always imagined that home robots would be more like Rosie—robotic and cartoonish—and my own research moved further and further away from the human form because non-humanoid robots were more practical and preferred by users. I struggled to picture a humanoid in my own house–-or any of the homes where I had deployed robots in. And after years of human-centered research in robotics, I could not imagine users welcoming humanoids into their homes without hesitation. Still, I assumed someone must want them. Surely some fraction of those billions had gone into market research and customer insight. And I wanted to know what they knew.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Six multi-use robots: Figure, Optimus, Neo, PR2, Fetch, Stretch, showcasing both humanoid and non-humanoid designs." 
class="rm-shortcode" data-rm-shortcode-id="fa45a4e6ad69be2d68fcb584b9e4d612" data-rm-shortcode-name="rebelmouse-image" id="98a50" loading="lazy" src="https://spectrum.ieee.org/media-library/six-multi-use-robots-figure-optimus-neo-pr2-fetch-stretch-showcasing-both-humanoid-and-non-humanoid-designs.jpg?id=61540703&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">(Left) Three real-world humanoids shown to participants in our study (Figure, Optimus, Neo). (Right) Examples of three general-purpose robots with few or no human-like features (PR2, Fetch, Stretch).</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Maya Cakmak</small></p><h2>What people actually think</h2><p>To find out, my students and I set out to better understand what the public thinks about humanoid robots in the home. We surveyed 76 participants from the U.S. and the U.K., asking whether they considered humanoids in the home acceptable, which designs they preferred, and why. We also presented them with imagined scenarios where either a humanoid or a special-purpose robot assisted an older adult with tasks like eating, dressing, or vacuuming, and asked which they would choose. The results are detailed in our paper “<a href="https://ieeexplore.ieee.org/xpl/conhome/1000636/all-proceedings" target="_blank">Attitudes Towards Humanoid Robots for In-Home Assistance</a>,” presented this week at the <a href="https://www.ro-man2025.org/" target="_blank">IEEE International Conference on Robot and Human Interactive Communication</a> (RO-MAN).</p><p>Our survey showed that people generally prefer special purpose robots over humanoids. They see special-purpose robots as safer, more private, and ultimately more comfortable to have in their homes and around loved ones. So, while humanoid companies (and their investors) dream of a single humanoid capable of doing it all, our survey participants seem to be more on board with a toolbox of smaller, specialized machines for most tasks: a Roomba for cleaning, a medication dispenser for pills, a stairlift for stairs. </p><p>Nevertheless, most survey participants considered humanoids in the home acceptable. Some even preferred humanoids for certain tasks, especially when the special-purpose alternative was more speculative—like a dressing assistant robot. When shown images of Neo, Figure 02, and Optimus performing household tasks, they agreed the robots looked useful and well-suited for homes. Many said they would feel comfortable having one in their own home—or in the home of a loved one. Of course, we had framed the scenarios optimistically: participants were told to assume the robots had passed extensive safety testing, were approved by regulators, and would be covered by insurance—assumptions that may be decades away from reality. And we can safely assume that finding humanoids “acceptable” doesn’t mean people actually want them—or that they’d be willing to pay for one. </p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Robots (both humanoid and special purpose) assist elderly with eating, dressing, and doing chores in various household tasks in this AI generated figure." 
class="rm-shortcode" data-rm-shortcode-id="6c4209b55791428d727b50ca5908458b" data-rm-shortcode-name="rebelmouse-image" id="76017" loading="lazy" src="https://spectrum.ieee.org/media-library/robots-both-humanoid-and-special-purpose-assist-elderly-with-eating-dressing-and-doing-chores-in-various-household-tasks-in.png?id=61540477&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">AI-generated images of humanoid and special purpose robots across eight tasks used in our questionnaires.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Cakmak et al, 2025</small></p><h2>Are home humanoids safe?</h2><p>Unsurprisingly the task context impacted whether people were open to humanoids in the home. Participants balked at imaginary scenarios involving safety-critical assistance—such as being carried down a staircase—responding with visceral rejections like “absolutely not in a million years.” Whereas for tasks that require little interaction—such as folding laundry—most were willing to imagine a humanoid lending a hand.</p><p>Even with our reassurances about safety, people readily imagined hazards: humanoids could trip, stumble, or tip over; they might glitch, run out of battery, or malfunction. The idea of a robot handling hot surfaces or sharp objects were also mentioned by multiple participants as serious concerns.</p><p>Privacy was another major concern. Participants worried about camera data being sent to the cloud or robots being remotely controlled by strangers. Several pointed out the security risks—any internet-connected device, they noted, could be hacked.</p><p>Even participants who saw clear benefits often described a lingering unease. Several described the robots as “creepy” or “unsettling,” and a few explicitly mentioned the <a href="https://spectrum.ieee.org/what-is-the-uncanny-valley" target="_self">uncanny valley effect</a>, pointing in particular to the black face masks common on this new generation of humanoids. One participant described the masks as creating an “eerie sensation, the idea that something might be watching you.” I felt a similar conflict watching a video of Neo (1X) <a href="https://www.1x.tech/neo" target="_blank">sitting on a couch after finishing its chores</a>—a scene that was meant to be comforting but instead left me unsettled.</p><p>A common reason participants preferred special-purpose robots was space. Humanoids were described as “bulky” and “unnecessary,” while specialized robots were seen as “less intrusive” and “more discreet.” These comments reminded me of user research conducted in Japan by the Toyota Research Institute, which led to a <a href="https://spectrum.ieee.org/toyota-research-ceiling-mounted-home-robot" target="_self">ceiling-mounted robot design</a> after finding that limited floor space was a major barrier to adoption. The same thought struck me at home when I showed an in-home humanoid video to my nine-year-old and asked if we should get one. He replied: “But we don’t have an extra bed.” His answer nailed the point: if your home doesn’t have room for another human, it probably doesn’t have room for a humanoid.</p><h2>Very big ifs</h2><p>In the end, the study didn’t fully answer my question about what these companies know that I don’t. Participants said they would accept humanoids—if they were safe, worked reliably, and didn’t cost more than the alternatives. Those are very big ifs.</p><p>And of course, our study asked people to use their imaginations. 
Looking at a picture is not the same as sharing your living room with a six-foot metal figure that moves—in reality, their reactions might be very different. Likewise, picturing yourself someday needing help with eating, dressing, or walking is very different from already relying on that help every day. Perhaps for those already living with these needs, the immediacy of their situation would make the promise of humanoids more compelling.</p><p>To probe further, I asked the same question to a panel of six people with motor limitations who are experienced users of assistive robots at the <a href="https://caregivingrobots.github.io/" target="_blank">HRI 2025 Physical Caregiving Robots Workshop</a>. Not one of them wanted a humanoid. Their concerns ranged from “it’s creepy” to “it has to be 100 percent safe because I cannot escape it.” One panelist summed it up perfectly: “Trying to make assistive robots with humanoids would be like trying to make autonomous cars by putting humanoids in the driver’s seat and asking them to drive like a human.” After all, it was obvious to investors that the better path to autonomous vehicles was to modify or redesign vehicles for autonomy, rather than replicate human drivers. So why are they convinced that replicating humans is the right solution for the home?</p><h2>What’s the alternative?</h2><p>Special-purpose robots may be preferable to humanoids, but building a dedicated machine for every possible task is unrealistic. Homes involve a long tail of chores, and general-purpose robots could indeed provide enormous value. However, the humanoid form is likely overkill, since much simpler designs—such as wheeled robots with basic pinch grippers—can already accomplish a great deal and are far more attainable. And people will likely accept modest <a href="https://www.wired.com/story/optimize-your-home-for-robots/" target="_blank">changes to their homes</a> to expand what these robots can do, just as <a href="https://link.springer.com/chapter/10.1007/978-3-540-74853-3_9" target="_blank">Roomba owners move furniture</a> to let their vacuums work. After all, our homes have already transformed around new technologies—cars, appliances, televisions—so why not for robots, if they prove just as valuable?</p><p>But beyond the unnecessary complexity, a more important issue about the humanoid form may be that users find it less desirable than simpler alternatives. Research has long shown that highly <a href="https://link.springer.com/chapter/10.1007/978-3-030-49788-0_19" target="_blank">human-like robots can trigger negative emotional responses</a>, and our study suggests that is true of the latest generation of humanoids. Simpler designs with more <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC8297987/" target="_blank">cartoon-like features are more likely to be accepted</a> as companions. We may even want home robots with no human-like features at all, so they can be viewed as tools rather than social agents. I believe those who would benefit most from in-home robots—including the <a href="https://www.who.int/news-room/fact-sheets/detail/ageing-and-health" target="_blank">rapidly growing population of older adults</a>—would prefer robots that empower them to do things for themselves, rather than ones that attempt to replace human caregivers. Yet humanoid companies are openly pursuing the latter.</p><p>Only time will tell whether humanoid companies can deliver on their promises—and whether people, myself included, will welcome them into their homes. 
I hope our findings encourage these companies to dig deeper and share their insights about in-home humanoid customers. I’d also like to see more capital directed toward alternative robot designs for the home. In the meantime, my students and I can’t wait to get our hands on one of these humanoids—purely in the name of science—bring it to older adults in our communities, and hear their unfiltered reactions. I can already imagine someone saying, “It better not sit in my recliner when I’m not looking,” or, “If it’s going to live here, it better pay rent.”</p>]]></description><pubDate>Wed, 03 Sep 2025 12:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/home-humanoid-robots-survey</guid><category>Humanoid robots</category><category>Guest article</category><category>Home robots</category><category>Robotics</category><dc:creator>Maya Cakmak</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/three-by-three-grid-showing-various-humanoid-robots-performing-household-tasks-such-as-watering-plants-vacuuming-and-loading-l.jpg?id=61534540&width=980"></media:content></item><item><title>Connecting Africa’s Next Generation of Engineers</title><link>https://spectrum.ieee.org/africa-s-next-generation-engineers</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/two-people-posing-with-a-yellow-robot-dog-at-a-tech-event.png?id=61525300&width=1245&height=700&coordinates=0%2C125%2C0%2C125"/><br/><br/><p>I get a lot of email from people asking to contribute to <em><em>IEEE Spectrum</em></em>. Usually, they want to write an article for us. But one bold query I received in January 2024 went much further: An undergraduate engineering student named <a href="https://www.linkedin.com/in/oluwatosin-kolade/?originalSubdomain=ng" rel="noopener noreferrer" target="_blank">Oluwatosin Kolade</a>, from Obafemi Awolowo University, in Ilé-Ifẹ̀, Nigeria, volunteered to be our robotics editor. </p><p>Kolade—Tosin to his friends—had been the newsletter editor for his IEEE student branch, but he’d never published an article professionally. His earnestness and enthusiasm were endearing. I explained that we already have a <a href="https://spectrum.ieee.org/u/evan-ackerman" target="_self">robotics editor</a>, but I’d be glad to work with him on writing, editing, and ultimately publishing an article. </p><p> Back in 2003, I had met plenty of engineering students when I traveled to <a href="https://spectrum.ieee.org/surf-africa" target="_self">Nigeria to report </a>on the SAT-3/WASC cable, the first undersea fiber-optic cable to land in West Africa. I remember seeing students gathering around obsolete PCs at Internet cafés connected to the world via a satellite dish powered by a generator. I challenged Tosin to tell <em><em>Spectrum</em></em> readers what it’s like for engineering students today. The result is “<a href="https://spectrum.ieee.org/stem-education-in-africa" target="_blank">Lessons from a Janky Drone</a>.”</p><p>I decided to complement Tosin’s piece with the perspective of a more established engineer in sub-Saharan Africa. I reached out to <a href="https://spectrum.ieee.org/u/g-pascal-zachary" target="_self">G. Pascal Zachary</a>, who has covered engineering education in Africa for us, and Zachary introduced me to <a href="https://ibaino.net/" rel="noopener noreferrer" target="_blank">Engineer Bainomugisha</a>, a computer science professor at Makerere University, in Kampala, Uganda. 
In “<a data-linked-post="2673897395" href="https://spectrum.ieee.org/africa-engineering-hardware" target="_blank">Learning More With Less</a>,” Bainomugisha draws out the things that were common to his and Tosin’s experience and suggests ways to make the hardware necessary for engineering education more accessible.</p><p>In fact, the region’s decades-long struggle to develop its engineering talent hinges on access to the three things we focus on in this issue: reliable electricity, ubiquitous broadband, and educational resources for young engineers.</p><p class="pull-quote">“During my weekly video calls with Tosin...the connection was pretty good—except when it wasn’t.”</p><p><span>Zachary’s article in this issue, “</span><a data-linked-post="2673881191" href="https://spectrum.ieee.org/electricity-access-sub-saharan-africa" target="_blank">What It Will Really Take to Electrify All of Africa</a><span>,”</span><span> tackles the first topic, with a focus on an ambitious initiative to bring electricity to an additional 300 million people by 2030.</span></p><p> Contributing editor <a href="https://spectrum.ieee.org/u/lucas-laursen" target="_self">Lucas Laursen</a>’s article, “<a data-linked-post="2673856838" href="https://spectrum.ieee.org/broadband-internet-in-nigeria" target="_blank">In Nigeria, Why Isn’t Broadband Everywhere?</a>” investigates the slow rollout of fiber-optic connectivity in the two decades since my first visit. As he learned when he traveled to Nigeria earlier this year, the country now has eight undersea cables delivering 380 terabits of capacity, yet less than half of the population has broadband access. </p><p>I got a sense of Nigeria’s bandwidth issues during my weekly video calls with Tosin to discuss his article. The connection was pretty good, except when it wasn’t. Still, I reminded myself, two decades ago such calls would have been nearly impossible. </p><p>Through those weekly chats, we established a professional connection, which made it that much more meaningful when I got to meet Tosin in person this past May at the <a href="https://2025.ieee-icra.org/" target="_blank">IEEE ICRA robotics conference</a>, in Atlanta. Tosin was attending thanks to a scholarship from the <a href="https://www.ieee-ras.org/" rel="noopener noreferrer" target="_blank">IEEE Robotics and Automation Society</a>. Like a kid in a candy shop, he kibitzed with fellow scholarship winners, attended talks, checked out robots, and met the engineers who built them. </p><p>As Tosin embarks on the next leg in his career journey, he is supported by the IEEE community, which not only recognizes his promise but gives him access to a network of professionals who can help him and his cohort realize their potential.</p>]]></description><pubDate>Mon, 01 Sep 2025 10:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/africa-s-next-generation-engineers</guid><category>Engineering education</category><category>Makerspaces</category><category>Higher education</category><category>3d printers</category><category>Arduino</category><dc:creator>Harry Goldstein</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/two-people-posing-with-a-yellow-robot-dog-at-a-tech-event.png?id=61525300&width=980"></media:content></item><item><title>Video Friday: Spot’s Got Talent</title><link>https://spectrum.ieee.org/video-friday-synchronized-dancing-robots</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/four-legged-robot-dog-doing-three-backflips-in-industrial-setting-with-equipment-and-yellow-stairs.gif?id=61532173&width=1245&height=700&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="o_iuqgxrrae"><em>Boston Dynamics is back and their dancing robot dogs are bigger, better, and bolder than ever! Watch as they bring a “dead” robot to life and unleash a never before seen synchronized dance routine to “Good Vibrations.”</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6b3bc23cd0d30d622765a4103c587403" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/o_iUqGxRRAE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>And much more interestingly, here’s a discussion of how they made it work:</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="691bac971231bd16e539ba9599b30f45" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/LMPxtcEgtds?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://bostondynamics.com/blog/spot-takes-the-stage/">Boston Dynamics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="4cxc0qjm82k">I don’t especially care whether a <a data-linked-post="2658800413" href="https://spectrum.ieee.org/humanoid-robot-falling" target="_blank">robot falls over</a>. 
I care whether it gets itself back up again.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a0cac494be632a4b0e255bbbb4a8d0e4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/4cxC0qjm82k?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="bxdlxf7bnqq"><em>The robot autonomously connects multiple wires to the environment using small flying anchors—drones equipped with anchoring mechanisms at the wire tips. Guided by an onboard RGB-D camera for control and environmental recognition, the system enables wire attachment in unprepared environments and supports simultaneous multi-wire connections, expanding the operational range of wire-driven robots.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7e08d4624013d480add8301acc352ca5" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/BXdlXf7BNQQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://shin0805.github.io/flying-anchor/">JSK Robotics Laboratory</a> ] at [ <a href="http://www.jsk.t.u-tokyo.ac.jp/" target="_blank">University of Tokyo</a> ]</p><p>Thanks, Shintaro!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="kgmwidtcyo0">For a robot that barely has a face, this is some pretty good emoting.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7f32bd7db00f18487156251fb2fea84a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/kGMWiDTCyo0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.pollen-robotics.com/reachy-mini/">Pollen</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="rs_mtkviazy"><em>Learning skills from human motions offers a promising path toward generalizable policies for whole-body humanoid control, yet two key cornerstones are missing: (1) a scalable, high-quality motion tracking framework that faithfully transforms kinematic references into robust, extremely dynamic motions on real hardware, and (2) a distillation approach that can effectively learn these motion primitives and compose them to solve downstream tasks. 
We address these gaps with BeyondMimic, a real-world framework to learn from human motions for versatile and naturalistic humanoid control via guided diffusion.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="29bf0b0ac3b910749943d2eefd1300cd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/RS_MtKVIAzY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://hybrid-robotics.berkeley.edu/">Hybrid Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="_akfhkcne0s"><em>Introducing our open-source metal-made bipedal robot MEVITA. All components can be procured through e-commerce, and the robot is built with a minimal number of parts. All hardware, software, and learning environments are released as open source.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c9c9fa2b40b749672512083b73f91b37" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/_akfHkCne0s?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://haraduka.github.io/mevita-hardware/">MEVITA</a> ]</p><p>Thanks, Kento!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="wubhxe-mpaq">I’ve always thought that being able to rent robots (or exoskeletons) to help you move furniture or otherwise carry stuff would be very useful.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="cac8863f98e697737af68225ad1ad4ed" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/WuBHxe-MPaQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.deeprobotics.cn/en">DEEP Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="od0qvdwgvyo"><em>A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot. The discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3e7605bb8e079ccfc126cef88d95c9c2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/oD0qvdwGvyo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://coe.gatech.edu/news/2025/08/tiny-fans-feet-water-bugs-could-lead-energy-efficient-mini-robots">Georgia Tech</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gugwb6wxcfo"><em>Dynamic locomotion of legged robots is a critical yet challenging topic in expanding the operational range of mobile robots. 
To achieve generalized legged locomotion on diverse terrains while preserving the robustness of learning-based controllers, this paper proposes to learn an attention-based map encoding conditioned on robot proprioception, which is trained as part of the end-to-end controller using reinforcement learning. We show that the network learns to focus on steppable areas for future footholds when the robot dynamically navigates diverse and challenging terrains.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3cf4a6ff1c4cda069475872f91248850" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GUgwB6WxcFo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arxiv.org/abs/2506.09588">Paper</a> ] from [ <a href="https://rsl.ethz.ch/" target="_blank">ETH Zurich</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="oeuh89bwrl4"><em>In the fifth installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots <a data-linked-post="2650275007" href="https://spectrum.ieee.org/astro-teller-captain-of-moonshots-at-x" target="_blank">Astro Teller</a> sits down with Google DeepMind’s Chief Scientist Jeff Dean for a conversation about the origin of Jeff’s pioneering work scaling neural networks. They discuss the first time AI captured Jeff’s imagination, the earliest Google Brain framework, the team’s stratospheric advancements in image recognition and speech-to-text, how AI is evolving, and more.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3e0b8f6d91c9ba85f7c86b72baf74921" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/OEuh89BWRL4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.youtube.com/@XTheMoonshotFactory/podcasts">Moonshot Podcast</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 29 Aug 2025 16:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-synchronized-dancing-robots</guid><category>Video friday</category><category>Dancing robots</category><category>Humanoid robots</category><category>Robotics</category><category>Quadruped robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/four-legged-robot-dog-doing-three-backflips-in-industrial-setting-with-equipment-and-yellow-stairs.gif?id=61532173&width=980"></media:content></item><item><title>Video Friday: Inaugural World Humanoid Robot Games Held</title><link>https://spectrum.ieee.org/world-humanoid-robot-games</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robots-racing-on-a-blue-track-while-spectators-watch-in-a-stadium.png?id=61501726&width=1245&height=700&coordinates=0%2C43%2C0%2C44"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="-xc8cs47lcc"><em>The First World Humanoid Robot Games Conclude Successfully! Unitree Strikes Four Golds (1500m, 400m, 100m Obstacle, 4×100m Relay).</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a032c6342bdd306eb7b990a515bf0972" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/-Xc8cs47LCc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.unitree.com/">Unitree</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="j-cofmqd-ss"><em>Steady! PNDbotics Adam has become the only full-size humanoid robot athlete to successfully finish the 100m Obstacle Race at the World Humanoid Robot Games!</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="8063a2a4bd12fde3329069353544d401" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/J-COfmQD-Ss?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://pndbotics.com/">PNDbotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="_vdo5ynys_w"><em>Introducing Field Foundation Models (FFMs) from FieldAI—a new class of “physics-first” foundation models built specifically for embodied intelligence. Unlike conventional vision or language models retrofitted for robotics, FFMs are designed from the ground up to grapple with uncertainty, risk, and the physical constraints of the real world. 
This enables safe and reliable robot behaviors when managing scenarios that they have not been trained on, navigating dynamic, unstructured environments without prior maps, GPS, or predefined paths.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b4991f44490462d8e919fd753762a617" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/_vDo5YnYs_w?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.fieldai.com/">Field AI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="0iqzsmvcqis"><em>Multiply Labs, leveraging Universal Robots’ collaborative robots, has developed a groundbreaking robotic cluster that is fundamentally transforming the manufacturing of life-saving cell and gene therapies. The Multiply Labs solution drives a staggering 74% cost reduction and enables up to 100x more patient doses per square foot of cleanroom.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e31134ffe39ceb67e5bcaadbcd7feb25" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/0iqZsmvCqis?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.universal-robots.com/case-stories/multiply-labs/">Universal Robots</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gmyo-cum1ka"><em>In this video, we put Vulcan V3, the world’s first ambidextrous humanoid robotic hand capable of performing the full American Sign Language (ASL) alphabet, to the ultimate test—side by side with a real human!</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c8c9fc776aab009f79d8d9ff5c74df0f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GmYO-Cum1KA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://hackaday.io/project/203847-ambidextrous-23-direct-drive-humanoid-robotic-hand">Hackaday</a> ]</p><p>Thanks, Kelvin!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="_og1egust-i">More robots need to have this form factor.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a4cdca5959880eccc1dc8c193fcc0349" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/_og1egUst-I?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://engineering.tamu.edu/news/2025/08/from-sea-to-space-this-robot-is-on-a-roll.html">Texas A&M University</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="pksef2rtqzy">Robotic vacuums are so pervasive now that it’s easy to forget how much of an icon the <a data-linked-post="2650267122" href="https://spectrum.ieee.org/video-friday-an-arm-for-your-partybot-and-irobot-turns-10" target="_blank">iRobot Roomba</a> has been.</p><p 
class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="805e910a12019237e6add751163fc622" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/PKSEF2RtqZY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.irobot.com/">iRobot</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="ujqjg1xk8a8">This is quite possibly the largest <a data-linked-post="2650254250" href="https://spectrum.ieee.org/dlr-super-robust-robot-hand" target="_blank">robotic hand</a> I’ve ever seen.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6617f3142cdf5f53e820ab090176fb10" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ujQJG1xk8A8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://moonshot-cafe-project.org/en/">CAFE Project</a> ] via [ <a href="https://built.itmedia.co.jp/bt/articles/2508/18/news082.html">BUILT</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="rz_xbnbituq"><em>Modular robots built by Dartmouth researchers are finding their feet outdoors. Engineered to assemble into structures that best suit the task at hand, the robots are pieced together from cube-shaped robotic blocks that combine rigid rods and soft, stretchy strings whose tension can be adjusted to deform the blocks and control their shape.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5b45024fd761ce33bd7efc99dd7ac3a6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Rz_xBnbituQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://home.dartmouth.edu/news/2025/08/multipurpose-robots-take-shape">Dartmouth</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="ptxede_xbro"><em>Our quadruped robot X30 has completed extreme-environment missions in Hoh Xil—supporting patrol teams, carrying vital supplies and protecting fragile ecosystems.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c86d59dfac00af07eda77f6e89557a03" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pTxEdE_Xbro?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.deeprobotics.cn/en">DEEP Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="u_7nt4eq1ns"><em>We propose a base-shaped robot named “koboshi” that moves everyday objects. This koboshi has a spherical surface in contact with the floor, and by moving a weight inside using built-in motors, it can rock up and down, and side to side. By placing everyday items on this koboshi, users can impart new movement to otherwise static objects. The koboshi is equipped with sensors to measure its posture, enabling interaction with users. 
Additionally, it has communication capabilities, allowing multiple units to communicate with each other.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e2ed91840821eb78809192d271435bbe" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/u_7nt4eQ1ns?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arxiv.org/html/2508.13509v1">Paper</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="bdw9f7aihas"><em>Bi-LAT is the world’s first Vision-Language-Action (VLA) model that integrates bilateral control into imitation learning, enabling robots to adjust force levels based on natural language instructions.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e3ad454ddbbc8d41754470ae30b8def9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/bdw9F7AIHas?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mertcookimg.github.io/bi-lat/">Bi-LAT</a> ] to be presented at [ <a href="https://www.ro-man2025.org/" target="_blank">IEEE RO-MAN 2025</a> ]</p><p>Thanks, Masato!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="pmcorujdxkq">Look at this jaunty little guy!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="fe73f006b2c8ea1062f763a30e6d2cbe" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pmcOrUjDXKQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>Although, they very obviously cut the video right before it smashes face-first into furniture more than once.</p><p>[ <a href="https://gepetto.github.io/BoltLocomotion/">Paper</a> ] to be presented at [ <a href="https://2025humanoids.org/" target="_blank">2025 IEEE-RAS International Conference on Humanoid Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="kqeqdcuyo50"><em>This research has been conducted at the Human Centered Robotics Lab at UT Austin. The video shows our latest experimental bipedal robot, dubbed Mercury, which has passive feet. 
This means that there are no actuated ankles, unlike humans, forcing Mercury to gain balance by dynamically stepping.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="40c1c4a35c1cac4390cc192af6e5ab86" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/kqEqDCuYO50?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://sites.utexas.edu/hcrl/">University of Texas at Austin Human Centered Robotics Lab</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="yyldielqgic"><em>We put two RIVR delivery robots to work with an autonomous vehicle—showing how Physical AI can handle the full last mile, from warehouse to consumers’ doorsteps.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0d0e0b38d47b76f0ac8e5ed5ba246f39" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/YyLDIelqgic?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.rivr.ai/">Rivr</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="3fh94e2vpmq"><em>The KR TITAN ultra is a high-performance industrial robot weighing 4.6 tonnes and capable of handling payloads up to 1.5 tonnes.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="be237d0fd193d9bbfcaceafd7a282b4c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/3fh94e2vPMQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.kuka.com/kr-titan-ultra?sc_camp=E45C2ED3B08848A6B2E310E0E28BB294">Kuka</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gpx3-qnrhl4"><em>CMU MechE’s Ding Zhao and Ph.D. student Yaru Niu describe LocoMan, a robotic assistant they have been developing.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1f370f0fc2719d8f2e31b0909a9cd8df" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/gPx3-QnrHl4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://safeai-lab.github.io/">Carnegie Mellon University</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="dkbgvo80c_u"><em>Twenty-two years ago, Silicon Valley executive Henry Evans had a massive stroke that left him mute and paralyzed from the neck down. But that didn’t prevent him from becoming a leading advocate of adaptive robotic tech to help disabled people—or from writing country songs, one letter at a time. 
Correspondent John Blackstone talks with Evans about his upbeat attitude and unlikely pursuits.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bcfb5420c867159696934a1fe77b4aa5" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/DKbGvO80C_U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.cbsnews.com/video/a-robotics-activists-remarkable-crusade/">CBS News</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 22 Aug 2025 15:30:04 +0000</pubDate><guid>https://spectrum.ieee.org/world-humanoid-robot-games</guid><category>Video friday</category><category>Robotics</category><category>Humanoid robots</category><category>Robot games</category><category>Robot ai</category><category>Quadruped robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robots-racing-on-a-blue-track-while-spectators-watch-in-a-stadium.png?id=61501726&width=980"></media:content></item><item><title>What I Learned From a Janky Drone</title><link>https://spectrum.ieee.org/stem-education-in-africa</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/four-young-black-men-at-desks-in-a-lecture-hall.png?id=61482540&width=1245&height=700&coordinates=0%2C183%2C0%2C184"/><br/><br/><p><strong>The package containing the</strong> ArduCopter 2.8 board finally arrived from China, bearing the weight of our anticipation. I remember picking it up, the cardboard box weathered slightly from its journey. As I tore through the layers of tape, it felt like unwrapping a long-awaited gift. But as I lifted the ArduCopter 2.8 board out of the box, my heart sank. The board, which was to be the cornerstone of our project, looked worn out and old, with visible scuffs and bent pins. This was just one of a cascade of setbacks my team would face.</p><p>It all started when I was assigned a project in machine design at <a href="https://oauife.edu.ng/" rel="noopener noreferrer" target="_blank">Obafemi Awolowo University</a> (OAU), located in the heart of Ilé-Ifẹ̀, an ancient Yoruba city in Osun State, in southwest Nigeria, where I am a mechanical engineering student entering my final year of a five-year program. OAU is one of Nigeria’s oldest and most prestigious universities, known for its beautiful campus and architecture. Some people I know refer to it as the “Stanford of Nigeria” because of the significant number of brilliant startups it has spun off. Despite its reputation, though, OAU—like every other federally owned institution in Nigeria—is underfunded and <a href="https://punchng.com/our-education-not-bargaining-chip-oau-students-lament-lecturers-strike/" rel="noopener noreferrer" target="_blank">plagued by faculty strikes</a>, leading to interruptions in academics. The lack of funding means students must pay for their undergraduate projects themselves, making the success of any project heavily dependent on the students’ financial capabilities.</p><h3>The Student & the Professor</h3><br/><p><strong>Two perspectives on engineering education in Africa</strong></p><p><em>Johnson I. Ejimanya is a one-man pony express. Walking the exhaust-fogged streets of Owerri, Nigeria, Ejimanya, the engineering dean of the Federal University of Technology, Owerri, carries with him a department’s worth of communications, some handwritten, others on disk. He’s delivering them to a man with a PC and an Internet connection who converts the missives into e-mails and downloads the responses. To Ejimanya, broadband means lugging a big bundle of printed e-mails back with him to the university, which despite being one of the country’s largest and most prestigious engineering schools, has no reliable means of connecting to the Internet.</em></p><p>I met Ejimanya when I visited Nigeria in 2003 to report on how the SAT-3/WASC, the first undersea fiber-optic cable to connect West Africa to the world, was being used. (The passage above is from my February 2004 <em>IEEE</em> <em>Spectrum</em> article “<a href="https://spectrum.ieee.org/surf-africa" target="_self">Surf Africa</a>.”) Beyond the lack of computers and Internet access, I saw labs filled with obsolete technology from the 1960s. If students needed a computer or to get online, they went to an Internet cafe, their out-of-pocket costs a burden on them and their families.</p><p>So is the situation any better 20-plus years on? The short answer is yes. 
But as computer science professor <a href="https://ibaino.net/" target="_blank">Engineer Bainomugisha</a> and IEEE student member <a href="https://www.linkedin.com/in/oluwatosin-kolade/?originalSubdomain=ng" rel="noopener noreferrer" target="_blank">Oluwatosin Kolade</a> attest in the following pages, there’s still a long way to go.</p><p>Both men are engineers but at different stages of their academic journey: Bainomugisha went to college in the early 2000s and is now a computer science professor at <a href="https://mak.ac.ug/" rel="noopener noreferrer" target="_blank">Makerere University</a> in Kampala, Uganda. Kolade is in his final semester as a mechanical engineering student at <a href="https://oauife.edu.ng/" rel="noopener noreferrer" target="_blank">Obafemi Awolowo University</a> in Ilé-Ifẹ̀, Nigeria. They describe the challenges they face and what they see as the path forward for a continent brimming with aspiring engineers but woefully short on the resources necessary for a robust education.</p><p>—Harry Goldstein</p><p><a href="https://www.researchgate.net/lab/Oluwaseun-K-Ajayi-Lab" rel="noopener noreferrer" target="_blank">Dr. Oluwaseun K. Ajayi</a>, an expert in computer-aided design (CAD), machine design, and mechanisms, gave us the freedom to choose our final project. I proposed a research project based on a paper titled “<em>Advance Simulation Method for Wheel-Terrain Interactions of Space Rovers: A Case Study on the UAE Rashid Rover</em>” by <a href="https://arxiv.org/search/cs?searchtype=author&query=Abubakar,+A" rel="noopener noreferrer" target="_blank">Ahmad Abubakar</a> and coauthors<em><em>.</em></em> But due to the computational resources required, it was rejected. Dr. Ajayi instead proposed that my fellow students and I build a surveillance drone, as it aligned with his own research. Dr. Ajayi, a passionate and driven researcher, was motivated by the potential real-world applications of our project. His constant push for progress, while sometimes overwhelming, was rooted in his desire to see us produce meaningful work.</p><p>As my team finished scoping out the preliminary concepts of the drone in CAD designs, we were ready to contribute money toward implementing our idea. We conducted a cost analysis and decided to use a third-party vendor to help us order our components from China. We went this route due to shipping and customs issues we’d previously experienced. Taking the third-party route was supposed to solve the problem. Little did we suspect what was coming.</p><p>By the time we finalized our cost analysis and started to gather funds, the price of the components we needed had skyrocketed due to a sudden economic crisis and depreciation of the Nigerian naira by 35 percent against the U.S. dollar at the end of January 2024. This was the genesis of our problem.</p><p class="ieee-inbody-related">Related: <a href="https://spectrum.ieee.org/africa-engineering-hardware" target="_blank">Learning More With Less</a></p><p>Initially, we were a group of 12, but due to the high cost per person, Dr. Ajayi asked another group, led by <a href="https://www.linkedin.com/in/nanaweitonbrasuoware/" rel="noopener noreferrer" target="_blank">Tonbra Suoware</a>, to merge with mine. Tonbra’s team had been planning <a href="https://spectrum.ieee.org/african-robotics-network" target="_blank">a robotic arm project</a> until Dr. 
Ajayi merged our teams and instructed us to work on the drone, with the aim of exhibiting it at the <a href="https://central.nasrda.gov.ng/" rel="noopener noreferrer" target="_blank">National Space Research and Development Agency</a>, in Abuja, Nigeria. The merger increased our group to 25 members, which helped with the individual financial burden but also meant that not everyone would actively participate in the project. Many just contributed their share of the money.</p><p>Tonbra and I drove the project forward.</p><h2>Supply Chain Challenges in African Engineering Education</h2><p>With Dr. Ajayi’s consent, my teammates and I scrapped the “surveillance” part of the drone project and raised the money for developing just the drone, totaling approximately 350,000 naira (approximately US $249). We had to cut down costs, which meant straying away from the original specifications of some of the components, like the flight controller, battery, and power-distribution board. Otherwise, the cost would have been way more unbearable.</p><p>We were set to order the components from China on 5 February 2024. Unfortunately, it was a long holiday in China, we were told, so we wouldn’t get the components until March. This led to tense discussions with Dr. Ajayi, despite having briefed him about the situation. Why the pressure? Our school semester ends in March, and having components arrive in March would mean that the project would be long overdue by the time we finished it. At the same time, we students had a compulsory academic-industrial training at the end of the semester.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Young Black man in plaid shirt sitting on a chair in front of a white board and a black board" class="rm-shortcode" data-rm-shortcode-id="662b14ac6096514656856080049e84b5" data-rm-shortcode-name="rebelmouse-image" id="d6b2d" loading="lazy" src="https://spectrum.ieee.org/media-library/young-black-man-in-plaid-shirt-sitting-on-a-chair-in-front-of-a-white-board-and-a-black-board.png?id=61482574&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Oluwatosin Kolade, a mechanical engineering student at Nigeria’s Obafemi Awolowo University, says the drone project taught him the value of failure.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Andrew Esiebo</small></p><p>But what choice did we have? We couldn’t back down from the project—that would have cost us our grade.</p><p>We got most of our components by mid-March, and immediately started working on the drone. We had the frame 3D-printed at a cost of 50 naira (approximately US $0.03) per gram for a 570-gram frame, for a total cost of 28,500 naira (roughly US $18).</p><p><span>Next, we turned to building the power-distribution system for the electrical components. Initially, we’d planned to use a power-distribution board to evenly distribute power from the battery to the speed controllers and the rotors. However, the board we originally ordered was no longer available. Forced to improvise, we used a </span><a href="https://verotl.com/circuitboards/veroboards" target="_blank">Veroboard </a><span>instead. We connected the battery in a configuration parallel to the speed controllers to ensure that each rotor received equal power. 
This improvisation did mean additional costs, as we had to rent soldering irons, hand drills, hot glue, cables, a digital multimeter, and other tools from an electronics hub in downtown Ilé-Ifẹ̀.</span></p><p><span></span>Everything was going smoothly until it was time to configure the flight controller—the ArduCopter 2.8 board—with the assistance of a software program called <a href="https://ardupilot.org/planner/" target="_blank">Mission Planner</a>. We toiled daily, combing through YouTube videos, online forums, Stack Exchange, and other resources for guidance, all to no avail. We even downgraded the Mission Planner software a couple of times, only to discover that the board we’d waited for so patiently was obsolete. It was truly heartbreaking, but we couldn’t order another one because we didn’t have time to wait for it to arrive. Plus, getting another flight controller would’ve cost an additional sum—240,000 naira (about US $150) for a <a href="https://www.hawks-work.com/products/pixhawk-2-4-8-flight-control-open-source-px4-autopilot" target="_blank">Pixhawk 2.4.8 flight controller</a>—which we didn’t have.</p><p>We knew our drone would be half-baked without the flight controller. Still, given our semester-ending time constraint, we decided to proceed with the configuration of the transmitter and receiver. We made the final connections and tested the components without the flight controller. To ensure that the transmitter could control all four rotors simultaneously, we tested each rotor individually with each transmitter channel. The goal was to assign a single channel on the transmitter that would activate and synchronize all four rotors, allowing them to spin in unison during flight. This was crucial, because without proper synchronization, the drone would not be able to maintain a stable flight.</p><p class="pull-quote">“This experience taught me invaluable lessons about resilience, teamwork, and the harsh realities of engineering projects done by students in Nigeria.”</p><p>After the final configuration and components testing, we set out to test our drone in its final form. But a few minutes into the testing, our battery failed. This failure meant the project had failed, and we were incredibly disappointed.</p><p>When we finally submitted our project to Dr. Ajayi, the deadline had passed. He told us to charge the battery so he could see the drone come alive, even though it couldn’t fly. But circumstances didn’t allow us to order a battery charger, and we were at a loss as to where to get help with the flight controller and battery. There are no tech hubs available for such things in Ilé-Ifẹ̀. We told Dr. Ajayi we couldn’t do as he’d asked and explained the situation to him. He finally allowed us to submit our work, and all team members received course credit.</p><h2>Resourcefulness is not a substitute for funding</h2><p>This experience taught me invaluable lessons about resilience, teamwork, and the harsh realities of engineering projects done by students in Nigeria. It showed me that while technical knowledge is crucial, the ability to adapt and improvise when faced with unforeseen challenges is just as important. I also learned that failure, though disheartening, is not an ending but a stepping stone toward growth and improvement.</p><p>In my school, the demands on mechanical engineering students are exceptionally high. For instance, in a single semester, I was sometimes assigned up to four different major projects, each from a different professor. 
Alongside the drone project, I worked on two other substantial projects for other courses. The reality is that a student’s ability to score well in these projects is often heavily dependent on financial resources. We are constantly burdened with the costs of running numerous projects. The country’s ongoing economic challenges, including currency devaluation and inflation, only exacerbate this burden.</p><p>In essence, when the world, including graduate-school-admission committees and industry recruiters, evaluates transcripts from Nigerian engineering graduates, it’s crucial to recognize that grades may not fully reflect a student’s capabilities in a given course. They can also reflect financial constraints, difficulties in sourcing equipment and materials, and the broader economic environment. This understanding must inform how transcripts are interpreted, as they tell a story not just of academic performance but also of perseverance in the face of significant challenges.</p><p>As I advance in my education, I plan to apply these lessons to future projects, knowing that perseverance and resourcefulness will be key to overcoming obstacles. The failed drone project has also given me a realistic glimpse into the working world, where unexpected setbacks and budget constraints are common. It has prepared me to approach my career with both a practical mindset and an understanding that success often comes from how well you manage difficulties, not just how well you execute plans. <span class="ieee-end-mark"></span></p>]]></description><pubDate>Wed, 20 Aug 2025 13:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/stem-education-in-africa</guid><category>Higher education</category><category>Engineering education</category><category>Undergraduate education</category><category>Makerspaces</category><category>3d printers</category><category>Arduino</category><dc:creator>Oluwatosin Kolade</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/four-young-black-men-at-desks-in-a-lecture-hall.png?id=61482540&width=980"></media:content></item><item><title>Smart Glasses Help Train General-Purpose Robots</title><link>https://spectrum.ieee.org/smart-glasses-robot-training</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/conceptual-collage-of-a-robotic-arm-reaching-down-between-two-circular-photos-of-experiments-meant-to-resemble-eyeglasses.jpg?id=61469620&width=1245&height=700&coordinates=0%2C118%2C0%2C119"/><br/><br/><p>General-purpose robots are hard to train. The dream is to have a <a href="https://spectrum.ieee.org/ai-robots" target="_blank">robot like the Jetson’s Rosie</a> that can<span> </span><span>performing a range of</span><span> household </span><span>tasks, like tidying up or folding laundry. But for that to happen, the robot needs to learn from a </span><a href="https://spectrum.ieee.org/global-robotic-brain" target="_blank">large amount of data</a><span> that match real-world conditions—that data can be difficult to collect. Currently, most training data is collected from multiple static cameras that have to be carefully set up to gather useful information. But what if bots could learn from the everyday interactions we already have with the physical world? </span></p><p>That’s a question that the <a href="https://www.lerrelpinto.com/group" target="_blank">General-purpose Robotics and AI Lab</a> at New York University, led by Assistant Professor <a href="https://www.lerrelpinto.com/#publications" target="_blank">Lerrel Pinto</a>, hopes to answer with <a href="https://egozero-robot.github.io/" target="_blank">EgoZero</a>, a smart-glasses system that aids robot learning by collecting data with a souped-up version of <a href="https://spectrum.ieee.org/meta-ar-glasses-expense" target="_blank">Meta’s glasses</a>. <strong></strong></p><p>In a <a href="https://arxiv.org/abs/2505.20290" target="_blank">recent preprint</a>, which serves as a proof of concept for the approach, the researchers trained a robot to complete seven manipulation tasks, such as picking up a piece of bread and placing it on a nearby plate. For each task, they collected 20 minutes of data from humans performing these tasks while recording their actions with glasses from Meta’s <a href="https://www.projectaria.com/" target="_blank">Project Aria</a>. (These sensor-laden glasses are used exclusively for research purposes.) When then deployed to autonomously complete these tasks with a robot, the system achieved a 70 percent success rate. </p><h2>The Advantage of Egocentric Data</h2><p>The “ego” part of EgoZero refers to the “egocentric” nature of the data, meaning that it is collected from the perspective of the person performing a task. “The camera sort of moves with you,” like how our eyes move with us, says <a href="https://raunaqb.com/" target="_blank">Raunaq Bhirangi</a>, a postdoctoral researcher at the NYU lab. </p><p>This has two main advantages: First, the setup is more portable than external cameras. Second, the glasses are more likely to capture the information needed because wearers will make sure they—and thus the camera—can see what’s needed to perform a task. “For instance, say I had something hooked under a table and I want to unhook it. I would bend down, look at that hook and then unhook it, as opposed to a third-person camera, which is not active,” says Bhirangi. “With this egocentric perspective, you get that information baked into your data for free.”</p><p>The second half of EgoZero’s name refers to the fact that the system is trained without any robot data, which can be costly and difficult to collect; human data alone is enough for the robot to learn a new task. 
This is enabled by a framework developed by Pinto’s lab that tracks points in space, rather than full images. When training robots on image-based data, “the mismatch is too large between what human hands look like and what robot arms look like,” says Bhirangi. This framework instead tracks points on the hand, which are mapped onto points on the robot. </p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="EgoZero localizes object points via triangulation over the camera trajectory, and computes action points via Aria MPS hand pose and a hand estimation model." class="rm-shortcode" data-rm-shortcode-id="31a1551426b63f5c0788d4cbde80aa11" data-rm-shortcode-name="rebelmouse-image" id="d916e" loading="lazy" src="https://spectrum.ieee.org/media-library/egozero-localizes-object-points-via-triangulation-over-the-camera-trajectory-and-computes-action-points-via-aria-mps-hand-pose.jpg?id=61469639&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The EgoZero system takes data from humans wearing smart glasses and turns it into usable 3D-navigation data for robots to do general manipulation tasks.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://egozero-robot.github.io/" target="_blank">Vincent Liu, Ademi Adeniji, Haotian Zhan, et al.</a></small></p><p>Reducing the image to points in 3D space means the model can track movement the same way, regardless of the specific robotic appendage. “As long as the robot points move relative to the object in the same way that the human points move, we’re good,” says Bhirangi.</p><p>All of this leads to a generalizable model that would otherwise require a lot of diverse robot data to train. If the robot was trained on data picking up one piece of bread—say, a deli roll—it can generalize that information to pick up a piece of ciabatta in a new environment. </p><h2>A Scalable Solution</h2><p>In addition to EgoZero, the research group is working on several projects to help make general-purpose robots a reality, including open-source robot designs, flexible <a href="https://arxiv.org/abs/2409.08276" target="_blank">touch sensors</a>, and additional methods of collecting real-world training data. </p><p>For example, as an alternative to EgoZero, the researchers have also designed a setup with a 3D-printed handheld gripper that more closely resembles most robot “hands.” A smartphone attached to the gripper captures video with the same point-space method that’s used in EgoZero. The team, by having people collect data without bringing a robot into their homes, provide two approaches that could be more scalable for collecting training data.</p><p>That scalability is ultimately the researcher’s goal. Large language models can harness the entire Internet, but there is no Internet equivalent for the physical world. 
Tapping into everyday interactions with smart glasses could help fill that gap.</p>]]></description><pubDate>Tue, 19 Aug 2025 14:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/smart-glasses-robot-training</guid><category>Training data</category><category>Smart glasses</category><category>Robotics</category><category>Meta</category><dc:creator>Gwendolyn Rak</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/conceptual-collage-of-a-robotic-arm-reaching-down-between-two-circular-photos-of-experiments-meant-to-resemble-eyeglasses.jpg?id=61469620&width=980"></media:content></item><item><title>Video Friday: SCUTTLE</title><link>https://spectrum.ieee.org/video-friday-scuttle-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/black-robotic-snake-navigates-rocky-terrain-in-bright-sunlight.jpg?id=61467306&width=1245&height=700&coordinates=0%2C179%2C0%2C180"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="http://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="uch3yx-mjta"><em>Check out our latest innovations on SCUTTLE, advancing multilegged mobility anywhere.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a0aae165324d5bb802ad7b69289e1a17" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/uCh3Yx-MjtA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://groundcontrolrobotics.com/">GCR</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="hoornv3la0k">That laundry-folding robot we’ve been working on for 15 years is still not here.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6913d1c039c12817ac9b44f8cf2a843e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/HOoRnv3lA0k?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>Honestly I think <a data-linked-post="2668901591" href="https://spectrum.ieee.org/figure-new-humanoid-robot" target="_blank">Figure</a> could learn a few tricks from vintage UC Berkeley PR2, though.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="5359d4bdf7e1c7fd0b5dace2d9fe6eb3" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/gy5g33S0Gzo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">- YouTube</small> </p><p>[ <a href="https://www.figure.ai/">Figure</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="yznay63wcdw">Tensegrity robots are so cool, but so hard—it’s good to see 
progress.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bb4273eaa4a6ac51826ebce722ef0f8d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/YznAy63wcdw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.umich.edu/news/2025/advanced-actuator-tensegrity-robot/">Michigan Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="dy6ysug9f00">We should find out next week how quick this is.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="39f525f7e35533b9604c65c4cfa35c60" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dY6YSUG9F00?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.unitree.com/">Unitree</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="v6w_dtkwvtc"><em>We introduce a methodology for task-specific design optimization of multirotor Micro Aerial Vehicles. By leveraging reinforcement learning, Bayesian optimization, and covariance matrix adaptation evolution strategy, we optimize aerial robot designs guided only by their closed-loop performance in a considered task. Our approach systematically explores the design space of motor pose configurations while ensuring manufacturability constraints and minimal aerodynamic interference. Results demonstrate that optimized designs achieve superior performance compared to conventional multirotor configurations in agile waypoint navigation tasks, including against fully actuated designs from the literature. We build and test one of the optimized designs in the real world to validate the sim2real transferability of our approach.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b08e05f9d393dff055a3825f45335718" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/V6w_DTKWvtc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.autonomousrobotslab.com/">ARL</a> ]</p><p>Thanks, Kostas!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="5gpphya6tkc">I guess legs are required for this inspection application because of the stairs right at the beginning? 
But sometimes, that’s how the world is.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4614b3061ab10a13ec6c023f7f79f6b8" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/5GPphya6tkc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.deeprobotics.cn/en">DEEP Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="lu-coco8xzy"><em>The Institute of Robotics and Mechatronics at DLR has a long tradition in developing multifingered hands, creating novel mechatronic concepts as well as autonomous grasping and manipulation capabilities. The range of hands spans from Rotex, a first two-fingered gripper for space applications, to the highly anthropomorphic Awiwi Hand and variable stiffness end effectors. This video summarizes the developments of DLR in this field over the past 30 years, starting with the Rotex experiment in 1993.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9b730c1c696e64c9a91c8e4a809a1554" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/lu-CoCO8xZY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dlr.de/en/rm">DLR RM</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="sgf0nkx8t9a"><em>The quest for agile quadrupedal robots is limited by handcrafted reward design in reinforcement learning. While animal motion capture provides 3D references, its cost prohibits scaling. We address this with a novel video-based framework. 
The proposed framework significantly advances robotic locomotion capabilities.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7c7d4ff76c677d6b936f4ce3a4bd4067" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/SGf0Nkx8t9A?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arclab.hku.hk/">Arc Lab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="la1dh0smkkm">Serious question: Why don’t <a data-linked-post="2666662286" href="https://spectrum.ieee.org/humanoid-robots" target="_blank">humanoid robots</a> sit down more often?</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ae30d15a26a701f11b24b3ec2ddfe804" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/la1dh0SmkkM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.engineai.com.cn/">EngineAI</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="mcqixha8ykg">And now, this.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9beea64e285ec85616820247546e13e4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/MCqIxHA8YKg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="0kb6fz2kjt8"><em>NASA researchers are currently using wind tunnel and flight tests to gather data on an electric vertical takeoff and landing (eVTOL) scaled-down small aircraft that resembles an air taxi that aircraft manufacturers can use for their own designs. By using a smaller version of a full-sized aircraft called the RAVEN Subscale Wind Tunnel and Flight Test (RAVEN SWFT) vehicle, NASA is able to conduct its tests in a fast and cost-effective manner. 
</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="696c6da4358a4efdd5289afd0142bcc6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/0KB6FZ2kjT8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.nasa.gov/aeronautics/air-taxi-flight-controls/">NASA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8otwr-s6qus"><em>This video details the advances in orbital manipulation made by DLR’s Robotic and Mechatronics Center over the past 30 years, paving the way for the development of robotic technology for space sustainability.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="54f8939f53a513bd6f528681ca5eae2e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8OtwR-S6QUs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dlr.de/en/rm">DLR RM</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="bmfpvcu16sq"><em>This summer, a team of robots explored a simulated Martian landscape in Germany, remotely guided by an astronaut aboard the International Space Station. This marked the fourth and final session of the Surface Avatar experiment, a collaboration between ESA and the German Aerospace Center (DLR) to develop how astronauts can control robotic teams to perform complex tasks on the Moon and Mars.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5c6b999b15ede0cf1ccd7e925d754de8" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/BMFPVCu16SQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://blogs.esa.int/exploration/human-minds-robotic-hands/">ESA</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 15 Aug 2025 16:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-scuttle-robot</guid><category>Video friday</category><category>Robotics</category><category>Crawler</category><category>Humanoid robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/black-robotic-snake-navigates-rocky-terrain-in-bright-sunlight.jpg?id=61467306&width=980"></media:content></item><item><title>Designing for Functional Safety: A Developer's Introduction</title><link>https://events.bizzabo.com/749990?utm_source=Wiley&utm_medium=Spectrum</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.png?id=61453640&width=980"/><br/><br/><p>Welcome to your essential guide to <span>functional safety, tailored specifically for product developers. In a world where technology is increasingly integrated into every aspect of our lives—from industrial robots to autonomous vehicles—the potential for harm from product malfunctions makes functional safety not just important, but critical. </span></p><p>This webinar cuts through the complexity to provide a clear understanding of what functional safety truly entails and why it’s critical for product success. We’ll start by defining functional safety not by its often-confusing official terms, but as a structured methodology for managing risk through defined engineering processes, essential product design requirements, and probabilistic analysis. The “north star” goals? To ensure your <span>product not only works reliably but, if it does fail, it does so in a safe and predictable manner.</span></p><p>We’ll dive into two fundamental concepts: the <span>Safety Lifecycle, a detailed engineering process focused on design quality to minimize systematic failures, and Probabilistic, Performance-Based Design using reliability metrics to minimize random hardware failures. You’ll learn about IEC 61508, the foundational standard for functional safety, and how numerous industry-specific standards derive from it.</span></p><p>The webinar will walk you through the Engineering Design phases: analyzing hazards and required <span>risk reduction, realizing optimal designs, and ensuring safe operation. We’ll demystify the Performance Concept and the critical Safety Integrity Level (SIL), explaining its definition, criteria (systematic capability, architectural constraints, PFD), and how it relates to industry-specific priorities.</span></p><p>Discover key Design Verification techniques like DFMEA/DDMA and FMEDA, emphasizing how these tools help identify and address problems early in development. We’ll detail the <span>FMEDA technique showing how design decisions directly impact predictions like safe and dangerous failure rates, diagnostic coverage, and useful life. Finally, we’ll cover Functional Safety Certification, explaining its purpose, process, and what adjustments to your development process can set you up for success.</span></p><p><span><span><a href="https://events.bizzabo.com/749990?utm_source=Wiley&utm_medium=Spectrum" target="_blank">Register now for this free webinar!</a></span></span></p>]]></description><pubDate>Fri, 15 Aug 2025 11:52:07 +0000</pubDate><guid>https://events.bizzabo.com/749990?utm_source=Wiley&utm_medium=Spectrum</guid><category>Functional safety</category><category>Risk management</category><category>Technology integration</category><category>Type:webinar</category><dc:creator>exida</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/61453640/origin.png"></media:content></item><item><title>Bug-size Bots Get More Nimble With Flexible Actuators</title><link>https://spectrum.ieee.org/soft-robot-actuators-bugs</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-cartoon-spider-stands-on-a-sandy-surface-next-to-a-small-bug-sized-flexible-robot.jpg?id=61443734&width=1245&height=700&coordinates=0%2C310%2C0%2C311"/><br/><br/><p>Small, autonomous robots that can access cramped environments could help with future search-and-rescue operations and inspecting infrastructure details that are difficult to access by people or larger bots. <strong></strong>However, the conventional, rigid motors that many robots rely on are difficult to miniaturize to these scales, because they easily break when made smaller or can no longer overcome friction forces.</p><p>Now, researchers have developed a muscle-inspired elasto-electromagnetic system to build insect-size <a data-linked-post="2660256392" href="https://spectrum.ieee.org/soft-robotics" target="_blank">“soft” robots</a> made of flexible materials.<strong> </strong>“It became clear that existing soft robotic systems at this scale still lack actuation mechanisms that are both efficient and autonomous,” says <a href="https://en.westlake.edu.cn/faculty/hanqing-jiang.html" target="_blank">Hanqing Jiang</a>, a professor of mechanical engineering at Westlake University in Hangzhou, China. Instead, they “often require harsh stimuli such as high voltage, strong external fields, or intense light that hinder their real-world deployment.”</p><p>Muscles function similarly to actuators, where body parts move through the contraction and relaxation of muscle fibers. When connected to the rest of the body, the brain and other electrical systems in the body allow animals to make a range of movements, including movement patterns that generate disproportionately large forces relative to their body mass.</p><h2>Muscle-Inspired Actuator Technology</h2><p>The new actuator is made of a flexible silicone polymer called polydimethylsiloxane, a <a data-linked-post="2657538742" href="https://spectrum.ieee.org/the-men-who-made-the-magnet-that-made-the-modern-world" target="_blank">neodymium magnet</a>, and an electrical coil intertwined with soft magnetic iron spheres. The researchers fabricated the actuators using a 2D molding process that can manufacture them at millimeter, centimeter, and decimeter scales. It is also scalable for larger, more powerful soft devices. <span>“We shifted focus from material response to structural design in soft materials and combined it with static magnetic forces to create a novel actuation mechanism,” says Jiang. The researchers published their work in <em><a href="https://www.nature.com/articles/s41467-025-62182-2#Abs1" target="_blank">Nature Communications</a>.</em></span></p><p>The new actuator is able to contract like a muscle using a balance between elastic and magnetic forces. When the actuator contracts,<strong> </strong><span>it generates an electrical current to create a Lorentz force between the electrical coil and the neodymium magnet. The actuator then deforms as the iron spheres respond to the increased force, which can be used to provide movement for the robot itself.</span> The flexible polymer ensures that the system can both deform and recover back to its original state when the current is no longer applied.<strong></strong></p><p>The system tested by the researchers achieved an output force of 210 newtons per kilogram, a low operational voltage below 4 volts, and is powered by onboard batteries. It can also undergo large deformations, up to a 60 percent contraction ratio. 
The researchers made it more energy efficient by not requiring continuous power to maintain a stable state when the actuator isn’t moving—a technique similar to how mollusks stay in place using their catch muscles, which can maintain high tension over long periods of time by latching together thick and thin muscle filaments to conserve energy.</p><h2>Autonomous Insect-Size Soft Robots</h2><p>The researchers used the actuators to develop a series of insect-size soft robots that could exhibit autonomous adaptive crawling, swimming, and jumping movements in a range of environments.</p><p>One such series of bug-size bots was a group of compact soft inchworm crawlers, just 16-by-10-by-10 millimeters in size and weighing only 1.8 grams. The robots were equipped with a translational joint, a 3.7-volt (30-milliampere-hour) lithium-ion battery, and an integrated control circuit. This setup enabled the robots to crawl using sequential contractions and relaxation—much like a caterpillar. <span>Despite its small size, the crawler exhibited an output force of 0.41 N, which is 8 to 45 times as powerful as existing insect-scale soft crawler robots. </span></p><p><span>This output force enabled the robot to traverse difficult-to-navigate terrain—including soil, rough stone, PVC, glass, wood, and inclines between 5 and 15 degrees—while keeping a consistent speed. </span><span>The bug bots were also found to be very resilient to impacts and falling. They suffered no damage and continued to work even after a 30-meter drop off the side of a building.</span></p><p><span></span><span>The researchers also developed 14-by-20-by-19-mm legged crawlers, weighing 1.9 g with an output force of 0.48 N, that crawled like inchworms. These used rotational elasto-electromagnetic joints to move the legs backward and forward and weighed just 1.9 g. The researchers also built a</span><span> 19-by-19-by-11-mm swimming robot that weighed 2.2 g with an output force of 0.43 N.</span></p><p><span>Alongside testing how the bots move on different surfaces, the researchers built a number of obstacle courses for them to navigate while performing sensing operations. The inchworm bot was put into an obstacle course featuring narrow and complex paths and used a humidity sensor to detect sources of moisture. The swimming bots were tested in both the lab and a river. A course was built in the lab, where the swimmer had to perform chemical sensing operations in a narrow chamber using an integrated miniature ethanol gas detector.</span></p><p><span>Jiang says the researchers are now looking at developing sensor-rich robotic swarms capable of distributed detection, decision-making, and collective behavior. 
“By coordinating many small robots, we aim to create systems that can cover wide areas, adapt to dynamic environments, and respond more intelligently to complex tasks.”</span></p><p>Jiang says they’re also looking into flying and other swimming movements enabled by the elasto-electromagnetic system,<strong> </strong><span>including a jellyfish-like soft robot for deep-sea exploration and marine research.</span></p>]]></description><pubDate>Tue, 12 Aug 2025 12:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/soft-robot-actuators-bugs</guid><category>Actuators</category><category>Robotics</category><category>Soft robots</category><category>Insect robots</category><dc:creator>Liam Critchley</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-cartoon-spider-stands-on-a-sandy-surface-next-to-a-small-bug-sized-flexible-robot.jpg?id=61443734&width=980"></media:content></item><item><title>Video Friday: Unitree’s A2 Quadruped Goes Exploring</title><link>https://spectrum.ieee.org/video-friday-exploration-robots</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robotic-dog-running-in-an-illuminated-arched-hallway-at-night-and-smashing-through-a-pane-of-glass.gif?id=61441393&width=1245&height=700&coordinates=0%2C31%2C0%2C32"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://www.whrgoc.com/">World Humanoid Robot Games</a>: 15–17 August 2025, BEIJING</h5><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="ve9usu7zplu"><em>The A2 sets a new standard in quadruped robots, balancing endurance, strength, speed, and perception.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0ca7f368fe4e99ec59083a72363fa988" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ve9USu7zpLU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>The A2 weighs 37 kg (81.6 lbs) unloaded. Fully loaded with a 25 kg (55 lb) payload, it can continuously walk for 3 hours or approximately 12.5 km. Unloaded, it can continuously walk for 5 hours or approximately 20 km. Hot-swappable dual batteries enable seamless battery swap and continuous runtime for any mission.</em></blockquote><p>[ <a href="https://www.unitree.com/A2">Unitree</a> ]</p><p>Thanks, William!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="bnyxqwc9qfs"><em>ABB is working with Cosmic Buildings to reshape how communities rebuild and transform construction after disaster. 
In response to the 2025 Southern California wildfires, Cosmic Buildings are deploying mobile robotic microfactories to build modular homes on-site—cutting construction time by 70% and costs by 30%.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="dfaf34433a46424936827c5bc7becaee" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/BnYXQwC9QFs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://new.abb.com/news/detail/128070/abb-and-cosmic-use-ai-powered-robots-to-rebuild-homes-in-los-angeles-area">ABB</a> ]</p><p>Thanks, Caitlin!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="icxuq4tf4fy">How many slightly awkward engineers can your <a data-linked-post="2666662286" href="https://spectrum.ieee.org/humanoid-robots" target="_blank">humanoid robot</a> pull?</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9f18c2671a3b7299ac2359b4fb765c87" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/IcXuQ4tF4FY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.magiclab.top/">MagicLab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="u4d6v3ohsoi">The physical robot hand does some nifty stuff at about 1 minute in.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0e3acbdf8660304575fa0dc3d16fba39" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/u4d6v3ohsOI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://srl.ethz.ch/">ETH Zurich Soft Robotics Lab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="rfspnvjdacq">Biologists, you can all go home now.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3acdbef0397f4c122fe0d2ae33c0d51e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/rFSpNVJDAcQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://global.agilex.ai/products/piper">AgileX</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="zx0qg1gxrji">The <a data-linked-post="2661714397" href="https://spectrum.ieee.org/robocup-robot-soccer" target="_blank">World Humanoid Robot Games</a> start next week in Beijing, and of course Tech United Eindhoven are there.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5560b9f92962f0654e3b0d996b4cdd26" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ZX0Qg1GXRjI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://techunited.nl/?page_id=2135&lang=en">Tech United</a> ]</p><div 
class="horizontal-rule"></div><blockquote class="rm-anchors" id="bxfheeecjfy"><em>Our USX-1 Defiant is a new kind of <a data-linked-post="2673356350" href="https://spectrum.ieee.org/video-friday-robot-metabolism" target="_blank">autonomous maritime platform</a>, with the potential to transform the way we design and build ships. As the team prepares Defiant for an extended at-sea demonstration, program manager Greg Avicola shares the foundational thinking behind the breakthrough vessel.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="17451c23a36e7c4972e3dc77d8252775" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/BxFhEEeCjFY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.darpa.mil/research/programs/no-manning-required-ship">DARPA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="_dnhdeqmf-4"><em>After loss, how do you translate grief into creation? Meditation Upon Death is Paul Kirby’s most personal and profound painting—a journey through love, loss, and the mystery of the afterlife. Inspired by a conversation with a Native American shaman and years of artistic exploration, Paul fuses technology and traditional art to capture the spirit’s passage beyond. With 5,796 brushstrokes, a custom-built robotic painting system, and a vision shaped by memory and devotion, this is the most important painting he has ever made.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="fb1c23e12d63fa341e08156aaf11836f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/_DNhdeqMf-4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://thefusepathway.com/studio/robotic-art/">Dulcinea</a> ]</p><p>Thanks, Alexandra!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="oz0lizn3umk"><em>In the fourth installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Andrew Ng, the founder of Google Brain and DeepLearning.AI, for a conversation about the history of neural network research and how Andrew’s pioneering ideas led to some of the biggest breakthroughs in modern-day AI.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3332c471e624c98c00bbdc5c80fca61e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Oz0LizN3uMk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://x.company/moonshotpodcast/">Moonshot Podcast</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 08 Aug 2025 15:30:04 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-exploration-robots</guid><category>Video friday</category><category>Robotics</category><category>Quadruped robots</category><category>Factory robots</category><category>Humanoid robots</category><category>Dexterity</category><dc:creator>Evan Ackerman</dc:creator><media:content 
medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/robotic-dog-running-in-an-illuminated-arched-hallway-at-night-and-smashing-through-a-pane-of-glass.gif?id=61441393&width=980"></media:content></item><item><title>Video Friday: Dance With CHILD</title><link>https://spectrum.ieee.org/video-friday-child-humanoid-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/man-controls-humanoid-robot-with-arm-gestures-wearing-harnessed-remote-control-device.png?id=61417799&width=1245&height=700&coordinates=0%2C256%2C0%2C257"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/" target="_blank">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="66_oit-mic0"><em>Many parents naturally teach motions to their child while using a baby carrier. In this setting, the parent’s range of motion fully encompasses the child’s, making it intuitive to scale down motions in a puppeteering manner. This inspired UIUC KIMLAB to build CHILD: Controller for Humanoid Imitation and Live Demonstration.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a2d9d8f8030812b823d7d0f055374f1b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/66_OIT-mic0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>The role of <a data-linked-post="2650233002" href="https://spectrum.ieee.org/video-friday-sarcos-guardian-xt-teleoperated-dexterous-robot" target="_blank">teleoperation</a> has grown increasingly important with the rising interest in collecting physical data in the era of Physical/Embodied AI. We demonstrate the capabilities of CHILD through loco-manipulation and full-body control experiments using the Unitree G1 and other PAPRAS dual-arm systems. 
To promote accessibility and reproducibility, we open-source the hardware design.</em></blockquote><p>[ <a href="https://uiuckimlab.github.io/CHILD-pages/">KIMLAB</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="v1q4su54iho">This costs less than US $6,000.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0ada267e6748cd47a1a5d4892fd16d80" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/v1Q4Su54iho?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.unitree.com/R1">Unitree</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="wlftez-qf1e">If I wasn’t sold on one of these little Reachy Minis before, I definitely am now.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="af1e51a7a4da99ad62c0ae498b46eb85" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/wLftEz-QF1E?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.pollen-robotics.com/reachy-mini/">Pollen</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="5ttcqrra4um"><em>In this study, we propose a falconry-like interaction system in which a <a data-linked-post="2650280287" href="https://spectrum.ieee.org/high-performance-ornithopter-drone" target="_blank">flapping-wing drone</a> performs autonomous palm-landing motion on a human hand. To achieve a safe approach toward humans, our motion planning method considers both physical and psychological factors.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="eb745b1bb23fb9132d47cad87279ffd9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/5TtCqRrA4UM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I should point out that palm landings are not falconry-like at all, and that if you’re doing falconry right, the bird should be landing on your wrist instead. I have other hobbies besides robots, you know!</p><p>[ <a href="https://arxiv.org/pdf/2507.17144">Paper</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="7r8ad4o_im4">I’m not sure that augmented reality is good for all that much, but I do like this use case of interactive robot help.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="23f7a7ccbf0527830eb46bca6625bcd6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/7R8Ad4o_IM4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mertcookimg.github.io/mrhad/">MRHaD</a> ]</p><p>Thanks, Masato!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="ms5mn2zhp4e"><em>LimX Dynamics officially launched its general-purpose full-size humanoid robot LimX Oli. It’s currently available only in Mainland China. 
A global version is coming soon.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="20eb02f2d67ab771f07870f4ab764125" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Ms5Mn2zHp4E?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>Standing at 165 cm and equipped with 31 active degrees of freedom (excluding end-effectors), LimX Oli adopts a general-purpose humanoid configuration with modular hardware-software architecture and is supported by a development tool chain. It is built to advance embodied AI development from algorithm research to real-world deployment.</em></blockquote><p>[ <a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]</p><p>Thanks, Jinyan!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="mzvo2sfq6su"><em>Meet Treadward – the newest robot from HEBI Robotics, purpose-built for rugged terrain, inspection missions, and real-world fieldwork. Treadward combines high mobility with extreme durability, making it ideal for challenging environments like waterlogged infrastructure, disaster zones, and construction sites. With a compact footprint and treaded base, it can climb over debris, traverse uneven ground, and carry substantial payloads.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="87b3b4cb069291f1eb4eebb4a473216d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/MZvO2Sfq6sU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.hebirobotics.com/robotic-mobile-platforms">HEBI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="7rymfdfhnge"><em>PNDbotics made a stunning debut at the 2025 World Artificial Intelligence Conference (WAIC) with the first-ever joint appearance of its full-sized humanoid robot Adam and its intelligent data-collection counterpart Adam-U.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b231bad3f1ac13e40bf1f811ab5eb0fb" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/7RymFDfhNgE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://pndbotics.com/">PNDbotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gi7ftpr-gve"><em>This paper presents the design, development, and validation of a fully autonomous dual-arm aerial robot capable of mapping, localizing, planning, and grasping parcels in an intra-logistics scenario. 
The aerial robot is intended to operate in a scenario comprising several supply points, delivery points, parcels with tags, and obstacles, generating the mission plan from the voice commands given by the user.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ba057e1be857ac37476bd5badde66959" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/gI7FTPr-gVE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://grvc.us.es/">GRVC</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pbu2cwvqixs"><em>We left the room. They took over. No humans. No instructions. Just robots...moving, coordinating, showing off. It almost felt like…they were staging something.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="28275873cd7ab5600b2c4b0767f2bde4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pbU2cWVqixs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://global.agilex.ai/">AgileX</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="seiukshccic"><em>TRI’s internship program offers a unique opportunity to work closely with our researchers on technologies to improve the quality of life for individuals and society. Here’s a glimpse into that experience from some of our 2025 interns!</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6f3efbec36e37ad97011ed4b1fb39077" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/SEIukShccic?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.tri.global/careers">TRI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="mjhesgakx1w"><em>In the third installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots Astro Teller sits down with Dr. 
Catie Cuan, robot choreographer and former artist in residence at Everyday Robots, for a conversation about how dance can be used to build beautiful and useful robots that people want to be around.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0d587bff8c111a84b5438ea0b767c296" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/MJhesGakX1w?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.youtube.com/playlist?list=PL7og_3Jqea4U6VgjOfaCGnqp6AiuVfgrU">Moonshot Podcast</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 01 Aug 2025 17:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-child-humanoid-robot</guid><category>Aerial robot</category><category>Humanoid robot</category><category>Robotics events</category><category>Video friday</category><category>Robotics</category><category>Humanoid robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/man-controls-humanoid-robot-with-arm-gestures-wearing-harnessed-remote-control-device.png?id=61417799&width=980"></media:content></item><item><title>Robots That Learn to Fear Like Humans Survive Better</title><link>https://spectrum.ieee.org/robot-risk-assessment-fear</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/silhouette-of-a-human-head-with-circuitry-pattern-extending-from-the-brain.jpg?id=61268049&width=1245&height=700&coordinates=0%2C412%2C0%2C413"/><br/><br/><p>
<em>This article is part of our exclusive <a href="https://spectrum.ieee.org/collections/journal-watch/" target="_blank">IEEE Journal Watch series</a> in partnership with IEEE Xplore.</em>
</p><p><span>Imagine walking downtown when you hear a loud bang coming from the construction site across the street—you may have the impulse to freeze or even duck down. This type of quick, instinctual reaction is one of the most basic but important evolutionary processes we have to protect ourselves and survive in unfamiliar settings.</span></p><p><span>Now, researchers are beginning to explore how a similar, fast-reacting thought process can be translated into robots. The idea is to <a data-linked-post="2650274303" href="https://spectrum.ieee.org/how-to-build-a-moral-robot" target="_blank">program robots to make decisions</a> the same way that humans do, based on our innate emotional responses to unknown stimuli—and in particular our fear response. The </span><a href="https://ieeexplore.ieee.org/document/11054284" target="_blank"><span>results</span></a><span>, published 27 June in </span><a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7083369" target="_blank"><span><em>IEEE Robotics and Automation Letters</em></span></a><span>, show that the approach can significantly enhance robots’ ability to assess risk and avoid dangerous situations.</span></p> <p><a href="https://www.polito.it/personale?p=039906" target="_blank">Alessandro Rizzo</a>, <span>an associate professor in automation engineering and robotics at the <a href="https://www.polito.it/" target="_blank">Polytechnic University of Turin</a> in Italy, led the study. He notes that robots currently face many challenges in adapting to dynamic environments while enacting self-preserving strategies. This is in large part because their control systems are often designed to accomplish very specific tasks. “As a result, robots may struggle to operate effectively in complex and changing conditions,” Rizzo says.</span></p><h2>How the Human Brain Responds to Risk</h2><p>Humans, on the other hand, are able to respond to many different and unique stimuli that we encounter. It’s theorized that our brains have two different ways to calculate, assess, and respond to risk in these scenarios. </p><p>The first involves a very innate response where we detect external stimuli (for example, a loud bang from a construction site) and our brains make very quick, emotional decisions (such as to freeze or duck). In a way, our brains are swiftly responding to raw data in these scenarios, rather than taking the time to more thoroughly process it. </p><p>According to a theory on how our brains work, called the dual-pathway hypothesis, this reaction is elicited by the “<a href="https://www.numberanalytics.com/blog/amygdala-emotional-regulation-mental-health" target="_blank">low road</a>,” neural circuitry responsible for emotions, driven by the amygdala. But when our brains instead use experience and more articulated reasoning involving our prefrontal cortex, this is the second, “high road” pathway to respond to stimuli.</p><p>Rizzo and a doctoral candidate in his lab, <a href="https://www.polito.it/en/staff?p=andrea.usai" target="_blank">Andrea Usai</a>, were curious to see how these two different approaches for confronting risky situations would play out in robots that have to navigate unfamiliar environments. They began by designing a control system for robots that emulates a fear response via the low road.</p><p>“We focused on fear, as it is one of the most studied emotions in neuroscience and, in our view, the one with the greatest potential for robotics,” says Usai. 
“Fear is closely related to self-preservation and rapid responses to danger, both of which are critical for adaptive behavior.”</p><h2>Reinforcement Learning in Robotics</h2><p>To emulate the fear response in their robot, the researchers designed a controller based on reinforcement learning, which helps the robot dynamically adjust its priorities and constraints in real time based on raw data of its surroundings. These results inform the behavior of a second algorithm called a nonlinear model predictive controller, which sets a corresponding motor pattern to the robot’s locomotion. </p><p>Through simulations, Rizzo and Usai tested how their robot navigates unfamiliar environments, comparing it to other robot control systems without the fear element. The simulations involved different scenarios, with various dangerous and nondangerous obstacles, which are either static or moving around the simulated environment. </p><p>The results show that the robot with the low-road programming was able to navigate a smoother and safer path toward its goal compared to conventional robot designs. For example, in one of the scenarios with hazards dynamically moving around, the low-road robot navigated around dangerous objects with a wider berth of about 3.1 meters, whereas the other two conventional robots tested in this study came within a harrowing 0.3 and 0.8 meters of dangerous objects. </p><p>Usai says there are many different scenarios where this low-road approach to robotics could be useful, including cases of object manipulation, surveillance, and rescue operations, where robots must deal with hazardous conditions and may need to adopt more cautious behavior. </p><p>But as Usai notes, the low-road approach is very reactive in nature and is better suited for very quick decisions that are needed in the short term. Therefore, the research team is working on a control design that mimics the high road that, while complementing the low road, could help robots make more rational, long-term decisions. </p><p>The researchers are considering doing this using <a data-linked-post="2666621738" href="https://spectrum.ieee.org/how-ai-can-personalize-education" target="_blank">multimodal large language models</a>, like ChatGPT. As Rizzo explains, “These models could help simulate some of the core functions of the human prefrontal cortex, such as decision-making, strategic planning, and context evaluation, allowing us to emulate more cognitively driven responses in robots.” </p><p>“Looking ahead, it would also be interesting trying to extend the architecture to incorporate multiple emotions,” Rizzo adds, “enabling a richer and more nuanced form of adaptive behavior in robotic systems.”</p>]]></description><pubDate>Sat, 26 Jul 2025 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/robot-risk-assessment-fear</guid><category>Robotics</category><category>Risk assessment</category><category>Surveillance</category><category>Journal watch</category><dc:creator>Michelle Hampson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/silhouette-of-a-human-head-with-circuitry-pattern-extending-from-the-brain.jpg?id=61268049&width=980"></media:content></item><item><title>Video Friday: Skyfall Takes on Mars With Swarm Helicopter Concept</title><link>https://spectrum.ieee.org/video-friday-skyfall-mars-helicopter</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/artist-s-concept-of-a-drone-deployment-system-on-mars-6-propellers-connected-by-latticed-scaffolding-and-a-protective-shell-abo.png?id=61322340&width=1245&height=700&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="gqaupq3_xrs"><em>AeroVironment revealed Skyfall—a potential future mission concept for next-generation Mars Helicopters developed with NASA’s Jet Propulsion Laboratory (JPL) to help pave the way for human landing on Mars through autonomous aerial exploration. <br/><br/>The concept is heavily focused on rapidly delivering an affordable, technically mature solution for expanded Mars exploration that would be ready for launch by 2028. Skyfall is designed to deploy six scout helicopters on Mars, where they would explore many of the sites selected by NASA and industry as top candidate landing sites for America’s first Martian astronauts. While exploring the region, each helicopter can operate independently, beaming high-resolution surface imaging and subsurface radar data back to Earth for analysis, helping ensure crewed vehicles make safe landings at areas with maximum amounts of water, ice, and other resources.<br/><br/></em><em>The concept would be the first to use the “Skyfall Maneuver”—an innovative entry, descent, and landing technique whereby the six rotorcraft deploy from their entry capsule during its descent through the Martian atmosphere. 
By flying the helicopters down to the Mars surface under their own power, Skyfall would eliminate the necessity for a landing platform–traditionally one of the most expensive, complex, and risky elements of any Mars mission.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3eecc27ea351b4023bfcb11dca679a50" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GqAuPq3_XRs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.avinc.com/resources/press-releases/view/av-reveals-skyfall-future-concept-next-gen-mars-helicopters-for-exploration-and-human-landing-preparation">AeroVironment</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="g6ychxkn5nk">By far the best part of videos like these is watching the expressions on the faces of the <a data-linked-post="2650275991" href="https://spectrum.ieee.org/students-race-driverless-cars-in-germany-in-formula-student-competition" target="_blank">students</a> when their robots succeed at something.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="01c5b4fc9378281f6acaf253235b4033" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/g6YchXkN5Nk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://railab.kaist.ac.kr/">RaiLab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="siuwjwkjdgs">This is just a rendering of course, but the real thing should be showing up on 6 August.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="8ab996f34e2be8506e2779ee52a20dcd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/siuwJWkjDgs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.fftai.com/">Fourier</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="v_uakh6swyc"><em>Top performer in its class! Less than two weeks after its last release, MagicLab unveils another breakthrough — MagicDog-W, the wheeled quadruped robot. 
Cyber-flex, dominate all terrains!</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="766663aff3271ed07e42bb046c5c1c14" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/V_uakh6swYc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.magiclab.top/dog">MagicLab</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="zahmhge_f-m"><em>Inspired by the octopus’s remarkable ability to wrap and grip with precision, this study introduces a vacuum-driven, origami-inspired <a data-linked-post="2650272770" href="https://spectrum.ieee.org/soft-actuators-go-from-squishy-to-stiff-and-back-again" target="_blank">soft actuator</a> that mimics such versatility through self-folding design and high bending angles. Its crease-free, 3D-printable structure enables compact, modular robotics with enhanced grasping force—ideal for handling objects of various shapes and sizes using octopus-like suction synergy.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="79e130248d6b4288ad9a3427e54c957a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ZAHmhGE_f-M?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://ieeexplore.ieee.org/document/11079234">Paper</a> ] via [ <a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=8860" target="_blank">IEEE Transactions on Robots</a> ]</p><p>Thanks, Bram!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="k5x8potn0ii">Is it a plane? Is it a helicopter? 
Yes.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3dba77c2eb51d033a9c65a69d9bdbe57" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/K5X8Potn0iI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://ris.bme.cityu.edu.hk/">Robotics and Intelligent Systems Laboratory, City University of Hong Kong</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="vhnc3y_ykks">You don’t need wrist rotation as long as you have the right <a data-linked-post="2667175149" href="https://spectrum.ieee.org/flexiv-gecko-gripper" target="_blank">gripper</a>.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="49ea0eeb83b1023f844b0e9fc17428fa" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/vHNc3Y_YKks?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.nature.com/articles/s42256-025-01039-1">Nature Machine Intelligence</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="dmpa5mqlqws">ICRA 2026 will be in Vienna next June!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="345e2b957d0777d1ef0e5ae00b9cdf86" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dMPA5MQlQws?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://2026.ieee-icra.org/">ICRA 2026</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="fnv2bwecs9u">Boing, boing, boing!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c036f1c61e9567a273ad3299b53e0847" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Fnv2BweCs9U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://ris.bme.cityu.edu.hk/">Robotics and Intelligent Systems Laboratory, City University of Hong Kong</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="muu3bqo9rki"><em>ROBOTERA Unveils L7: Next-Generation Full-Size Bipedal Humanoid Robot with Powerful Mobility and Dexterous Manipulation!</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="50323e65d5091db34b02096f030384de" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/muu3Bqo9RkI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.robotera.com/en/">ROBOTERA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="tnryo2uasws"><em>Meet UBTECH New-Gen of Industrial Humanoid Robot—Walker S2 makes multiple industry-leading breakthroughs! 
Walker S2 is the world’s first humanoid robot to achieve 3-minute autonomous battery swapping and 24/7 continuous operation.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4b40cbf9fc7b06c4434cb971868eb5be" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/TNryO2uasws?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.ubtrobot.com/en/">UBTECH</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="xfjjoe8to5a"><em>ARMstrong Dex is a human-scale dual-arm hydraulic robot developed by the Korea Atomic Energy Research Institute (KAERI) for disaster response. It can perform vertical pull-ups and manipulate loads over 50 kilograms, demonstrating strength beyond human capabilities. However, disaster environments also require agility and fast, precise movement. This test evaluated ARMstrong Dex’s ability to throw a 500-milliliter water bottle (0.5 kg) into a target container. The experiment assessed high-speed coordination, trajectory control, and endpoint accuracy, which are key attributes for operating in dynamic rescue scenarios.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2793c53ff1951bfe0e34092eac131981" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/XfjjOE8TO5A?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://kaeri.ust.ac.kr/">KAERI</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="mrgeroghkhy">This is not a humanoid robot, it’s a data-acquisition platform.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a360087cbb510c147995340c59151532" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/MRgErOGhKHY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://pndbotics.com/">PNDbotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="r_ioywadb9s">Neat feature on this drone to shift the battery back and forth to compensate for movement of the arm.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ab3ae4bae96a4fa9752ef0dabb021e01" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/R_ioYwADb9s?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.mdpi.com/2504-446X/9/8/516">Paper</a> ] via [ <a href="https://www.mdpi.com/journal/drones" target="_blank">Drones journal</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="fsyhcolqery"><em>As residential buildings become taller and more advanced, the demand for seamless and secure in-building delivery continues to grow. 
In high-end apartments and modern senior living facilities where couriers cannot access upper floors, robots like FlashBot Max are becoming essential. In this featured elderly care residence, FlashBot Max completes 80 to 100 deliveries daily, seamlessly navigating elevators, notifying residents upon arrival, and returning to its charging station after each delivery.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a72d2a87a0d94e8788022c075095da92" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/fsyhcoLqeRY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.pudurobotics.com/en">Pudu Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="r3u9kidbam4">“How to Shake Trees With Aerial Manipulators.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="529cb4aabb607a6273e945f8678d9595" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/R3U9KidBAM4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://grvc.us.es/">GRVC</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="o72wi-txvlk"><em>We see a future where seeing a cobot in a hospital delivering supplies feels as normal as seeing a tractor in a field. Watch our CEO Brad Porter share what robots moving in the world should feel like.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b1600247076aff89e9020d7fbbc6ac8c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/o72wi-txvlk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.co.bot/">Cobot</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jdps5thwy8u"><em>Introducing the Engineered Arts UI for robot Roles, it’s now simple to set up a robot to behave exactly the way you want it to. We give a quick overview of customization for languages, personality, knowledge, and abilities. All of this is done with no code. Just simple LLM prompts, drop-down list selections and some switches to enable the features you need.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4cd998525df324cf4923429a901818b7" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/jDPs5thwY8U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://engineeredarts.com/">Engineered Arts</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8s9tjrz01fo"><em>Unlike most quadrupeds, CARA doesn’t use any gears or pulleys. Instead, her joints are driven by rope through capstan drives. Capstan drives offer several advantages: zero backlash, high torque transparency, low inertia, low cost, and quiet operation. 
These qualities make them an ideal speed reducer for robotics.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6dcc8b380ec3dcc15b9096a812d35d56" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8s9TjRz01fo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.aaedmusa.com/projects/cara">CARA</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 25 Jul 2025 16:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-skyfall-mars-helicopter</guid><category>Video friday</category><category>Robotics</category><category>Mars helicopter</category><category>Drones</category><category>Humanoid robots</category><category>Quadruped robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/artist-s-concept-of-a-drone-deployment-system-on-mars-6-propellers-connected-by-latticed-scaffolding-and-a-protective-shell-abo.png?id=61322340&width=980"></media:content></item><item><title>DeepMind’s Quest for Self-Improving Table Tennis Agents</title><link>https://spectrum.ieee.org/deepmind-table-tennis-robots</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robots-playing-ping-pong-on-an-automated-table-in-a-tech-lab-setting.gif?id=61214827&width=1245&height=700&coordinates=0%2C0%2C0%2C0"/><br/><br/><p>Hardly a day goes by without impressive new robotic platforms emerging from academic labs and commercial startups worldwide. <a href="https://spectrum.ieee.org/humanoid-robots" target="_blank">Humanoid robots</a> in particular look increasingly capable of assisting us in factories and eventually in homes and hospitals. Yet, for these machines to be truly useful, they need sophisticated “brains” to control their robotic bodies. Traditionally, programming robots involves experts spending countless hours meticulously scripting complex behaviors and exhaustively tuning parameters, such as controller gains or motion-planning weights, to achieve desired performance. While machine learning (ML) techniques have promise, robots that need to learn new complex behaviors still require substantial human oversight and reengineering. At <a href="https://deepmind.google/" target="_blank">Google DeepMind</a>, we asked ourselves: How do we enable robots to learn and adapt more holistically and continuously, reducing the bottleneck of expert intervention for every significant improvement or new skill?</p><p>This question has been a driving force behind our robotics research. We are exploring paradigms where two robotic agents playing against each other can achieve a greater degree of autonomous self-improvement, moving beyond systems that are merely preprogrammed with fixed or narrowly adaptive ML models toward agents that can learn a broad range of skills on the job. Building on our previous work in ML with systems like <a href="https://spectrum.ieee.org/why-alphago-is-not-ai" target="_blank">AlphaGo</a> and <a href="https://spectrum.ieee.org/alphafold-proves-that-ai-can-crack-fundamental-scientific-problems" target="_blank">AlphaFold</a>, we turned our attention to the demanding sport of <a href="https://sites.google.com/view/competitive-robot-table-tennis/home" rel="noopener noreferrer" target="_blank">table tennis as a testbed</a>.</p><p>We chose table tennis precisely because it encapsulates many of the hardest challenges in robotics within a constrained, yet highly dynamic, environment. Table tennis requires a robot to master a confluence of difficult skills: Beyond just perception, it demands exceptionally precise control to intercept the ball at the correct angle and velocity and involves strategic decision-making to outmaneuver an opponent. These elements make it an ideal domain for developing and evaluating robust learning algorithms that can handle real-time interaction, complex physics, high-level reasoning and the need for adaptive strategies<span>—</span>capabilities that are directly transferable to applications like manufacturing and even potentially unstructured home settings.</p><h3>The Self-Improvement Challenge</h3><p>Standard machine learning approaches often fall short when it comes to enabling continuous, autonomous learning. Imitation learning, where a robot learns by mimicking an expert, typically requires us to provide vast numbers of human demonstrations for every skill or variation; this reliance on expert data collection becomes a significant bottleneck if we want the robot to continually learn new tasks or refine its performance over time. 
Similarly, reinforcement learning, which trains agents through trial-and-error guided by rewards or punishments, often necessitates that human designers meticulously engineer complex mathematical reward functions to precisely capture desired behaviors for multifaceted tasks, and then adapt them as the robot needs to improve or learn new skills, limiting scalability. In essence, both of these well-established methods traditionally involve substantial human involvement, especially if the goal is for the robot to continually self-improve beyond its initial programming. Therefore, we posed a direct challenge to our team: Can robots learn and enhance their skills with minimal or no human intervention during the learning-and-improvement loop?</p><h3>Learning Through Competition: Robot vs. Robot</h3><p><span>One innovative approach we explored mirrors the strategy used for AlphaGo: Have agents learn by competing against themselves. We experimented with having two robot arms play table tennis against each other, an</span><span> idea that is simple yet powerful. As one robot discovers a better strategy, its opponent is forced to adapt and improve, creating a cycle of escalating skill levels.</span></p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="a2d38ab45aa562a36fb1853a741828b6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/b9_OytzkWv8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-photo-credit" placeholder="Add Photo Credit..."> DeepMind </small> </p><p>To enable the extensive training needed for these paradigms, we engineered a fully autonomous table-tennis environment. This setup allowed for continuous operation, featuring automated ball collection as well as remote monitoring and control, allowing us to run experiments for extended periods without direct involvement. As a first step, we successfully trained a robot agent (replicated on both the robots independently) using reinforcement learning in simulation to play cooperative rallies. We fine-tuned the agent for a few hours in the real-world robot-versus-robot setup, resulting in a policy capable of holding long rallies. We then switched to tackling the competitive robot-versus-robot play.</p><p>Out of the box, the cooperative agent didn’t work well in competitive play. This was expected, because in cooperative play, rallies would settle into a narrow zone, limiting the distribution of balls the agent can hit back. Our hypothesis was that if we continued training with competitive play, this distribution would slowly expand as we rewarded each robot for beating its opponent. While promising, training systems through competitive self-play in the real world presented significant hurdles. The increase in distribution turned out to be rather drastic given the constraints of the limited model size. 
Essentially, it was hard for the model to learn to deal with the new shots effectively without forgetting old shots, and we quickly hit a local minimum in the training where after a short rally, one robot would hit an easy winner, and the second robot was not able to return it.</p><p>While robot-on-robot competitive play has remained a tough nut to crack, our team also investigated <a href="https://arxiv.org/abs/2408.03906" target="_blank">how the robot could play against humans competitively</a>. In the early stages of training, humans did a better job of keeping the ball in play, thus increasing the distribution of shots that the robot could learn from. We still had to develop a policy architecture consisting of low-level controllers with their detailed skill descriptors and a high-level controller that chooses the low-level skills, along with techniques for enabling a zero-shot sim-to-real approach to allow our system to adapt to unseen opponents in real time. In a user study, while the robot lost all of its matches against the most advanced players, it won all of its matches against beginners and about half of its matches against intermediate players, demonstrating solidly amateur human-level performance. Equipped with these innovations, plus a better starting point than cooperative play, we are in a great position to go back to robot-versus-robot competitive training and continue scaling rapidly.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="76be4312887661c93d0c360c138a1daa" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/EqQl-JQxToE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">DeepMind</small></p><h3>The AI Coach: VLMs Enter the Game</h3><p>A second intriguing idea we investigated leverages the power of <a href="https://spectrum.ieee.org/gemini-robotics" target="_blank">vision language models (VLMs)</a>, like Gemini. Could a VLM act as a coach, observing a robot player and providing guidance for improvement?</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Profile icon spins, transforming into colorful Gemini logo with sparkling star accent." class="rm-shortcode" data-rm-shortcode-id="b4bb1190d5d7ae71e04e64d200d0d04f" data-rm-shortcode-name="rebelmouse-image" id="975c2" loading="lazy" src="https://spectrum.ieee.org/media-library/profile-icon-spins-transforming-into-colorful-gemini-logo-with-sparkling-star-accent.gif?id=61227959&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">DeepMind</small></p><p>An important insight of this project is that VLMs can be leveraged for <em>explainable</em> robot policy search. Based on this insight, we developed the <a href="https://sites.google.com/asu.edu/sas-llm/" target="_blank">SAS Prompt</a> (summarize, analyze, synthesize), a single prompt that enables iterative learning and adaptation of robot behavior by leveraging the VLM’s ability to retrieve, reason, and optimize to synthesize new behavior. Our approach can be regarded as an early example of a new family of explainable policy-search methods that are entirely implemented within an LLM. Also, there is no reward function—the VLM infers the reward directly from the observations given in the task description. 
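</p><p>A rough sketch of this loop, assuming hypothetical <code>run_trial</code> and <code>query_vlm</code> helpers rather than the actual system, might look like this in Python:</p><pre><code>import ast

def run_trial(params: dict) -> str:
    # Placeholder: run one table-tennis episode with the current behavior
    # parameters and return a textual summary of what the cameras observed.
    return f"played one episode with params={params}"

def query_vlm(prompt: str) -> str:
    # Placeholder for a call to a vision language model (e.g., Gemini).
    # Here it simply echoes back the current parameters so the sketch runs end to end.
    return prompt.split("Current parameters: ")[1].splitlines()[0]

def sas_coaching_loop(task: str, params: dict, iterations: int = 3) -> dict:
    # Summarize-Analyze-Synthesize: the VLM plays the role of coach, and no
    # hand-engineered reward function is required.
    for _ in range(iterations):
        observations = run_trial(params)
        prompt = (
            f"Task: {task}\n"
            f"Current parameters: {params}\n"
            f"Observations: {observations}\n"
            "Summarize the robot's performance, analyze its weaknesses, and "
            "synthesize improved parameters as a Python dict literal."
        )
        reply = query_vlm(prompt)         # the coach's feedback
        params = ast.literal_eval(reply)  # adopt the synthesized parameters
    return params

print(sas_coaching_loop("return serves deep", {"swing_speed": 1.0, "aim": "deep"}))</code></pre><p>The point of the sketch is only the shape of the loop: observations go in, a natural-language analysis and updated parameters come out, and no hand-written reward function sits in between. 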
The VLM can thus become a coach that constantly analyzes the performance of the student and provides suggestions for how to get better.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="AI robot practicing ping pong with specific ball placements on a blue table." class="rm-shortcode" data-rm-shortcode-id="9fa625e55d5dcd43f19a1da3e758c81e" data-rm-shortcode-name="rebelmouse-image" id="c26ae" loading="lazy" src="https://spectrum.ieee.org/media-library/ai-robot-practicing-ping-pong-with-specific-ball-placements-on-a-blue-table.gif?id=61227979&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">DeepMind</small></p><h3>Toward Truly Learned Robotics: An Optimistic Outlook</h3><p>Moving beyond the limitations of traditional programming and ML techniques is essential for the future of robotics. Methods enabling autonomous self-improvement, like those we are developing, reduce the reliance on painstaking human effort. Our table-tennis projects explore pathways toward robots that can acquire and refine complex skills more autonomously. While significant challenges persist—stabilizing robot-versus-robot learning and scaling VLM-based coaching are formidable tasks—these approaches offer a unique opportunity. We are optimistic that continued research in this direction will lead to more capable, adaptable machines that can learn the diverse skills needed to operate effectively and safely in our unstructured world. The journey is complex, but the potential payoff of truly intelligent and helpful robotic partners makes it worth pursuing.</p><div class="horizontal-rule"></div><p><span><em>The authors express their deepest appreciation to the Google DeepMind Robotics team and in particular David B. D’Ambrosio, Saminda Abeyruwan, Laura Graesser, Atil Iscen, Alex Bewley, and Krista Reymann for their invaluable contributions to the development and refinement of this work.</em></span></p>]]></description><pubDate>Mon, 21 Jul 2025 15:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/deepmind-table-tennis-robots</guid><category>Google deepmind</category><category>Machine learning</category><category>Table tennis</category><category>Robotics</category><dc:creator>Pannag Sanketi</dc:creator><media:content medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/robots-playing-ping-pong-on-an-automated-table-in-a-tech-lab-setting.gif?id=61214827&width=980"></media:content></item><item><title>Video Friday: Robot Metabolism</title><link>https://spectrum.ieee.org/video-friday-robot-metabolism</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/structures-in-various-environments-made-of-robotic-modules-ocean-desert-river-ruins-desert-tower-living-room.png?id=61241507&width=1245&height=700&coordinates=95%2C0%2C95%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em><a href="https://spectrum.ieee.org/topic/robotics/" target="_blank">IEEE Spectrum </a></em><span><a href="https://spectrum.ieee.org/topic/robotics/">robotics</a>. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="http://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="b1pvzckd5ji"><em>Columbia University researchers introduce a process that allows machines to “grow” physically by integrating parts from their surroundings or from other robots, demonstrating a step toward self-sustaining robot ecologies.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6bef167149cf11ff4e4e9e9e66d1fd21" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/B1pvZcKd5JI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://robotmetabolism.github.io/">Robot Metabolism</a>] via [<a href="https://www.engineering.columbia.edu/about/news/robots-grow-consuming-other-robots">Columbia</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="2amzgvk97ge"><em>We challenged ourselves to see just how far we could push Digit’s ability to stabilize itself in response to a disturbance. 
Utilizing state-of-the-art AI technology and robust physical intelligence, Digit can adapt to substantial disruptions, all without the use of visual perception.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c828e568ba0d5ed1523a7cf7fb45a5b1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/2amzGvk97GE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.agilityrobotics.com/">Agility Robotics</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="ts-ttxvfvq4"><em>We are presenting the Figure 03 (F.03) battery—a significant advancement in our core humanoid robot technology roadmap.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="68683e58a26e3a33f84fb10b67c57b4c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Ts-TtxvfVq4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>The effort that was put into safety for this battery is impressive. But I would note two things: The battery life is “5 hours of run time at peak performance” without saying what “peak performance” actually means, and 2-kilowatt fast charge still means over an hour to fully charge.</p><p>[<a href="https://www.figure.ai/news/f-03-battery-development">Figure</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="mhp1wglw5wk">Well this is a nifty idea.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5c0783aa6942a2e01ae2ca7eaf98c35c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/mHP1WGlw5Wk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.ubtrobot.com/en/">UBTECH</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="7-n_ps85gau"><em>PAPRLE is a plug-and-play robotic limb environment for flexible configuration and control of robotic limbs across applications. With PAPRLE, users can use diverse configurations of leader-follower pair for teleoperation. 
In the video, we show several teleoperation examples supported by PAPRLE.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1ed3ef9f31d8e66083cb96366fa584d6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/7-N_Ps85GaU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://uiuckimlab.github.io/paprle-pages/">PAPRLE</a>]</p><p>Thanks, Joohyung!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="53u-5wyp5wo">Always nice to see a robot with a carefully thought-out commercial use case in which it can just do robot stuff like a robot.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4b7fc247ab5a72a22a30f0fd1ada669d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/53U-5wyp5Wo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.cohesiverobotics.com/products/smart-welding-robotic-workcell">Cohesive Robotics</a>]</p><p>Thanks, David!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="fr0kporfax8"><em>We are interested in deploying autonomous legged robots in diverse environments, such as industrial facilities and forests. As part of the DigiForest project, we are working on new systems to autonomously build forest inventories with legged platforms, which we have deployed in the UK, Finland, and Switzerland.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f19b88a58f491faa7b1289d47f47ea3a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Fr0kPOrfaX8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://dynamic.robots.ox.ac.uk/projects/legged-robots/">Oxford</a>]</p><p>Thanks, Matias!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="selyj6_qt_k"><em>In this research we introduce a self-healing, biocompatible strain sensor using Galinstan and a Diels-Alder polymer, capable of restoring both mechanical and sensing functions after repeated damage. 
This highly stretchable and conductive sensor demonstrates strong performance metrics—including 80% mechanical healing efficiency and 105% gauge factor recovery—making it suitable for smart wearable applications.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5ee57942f4f6e0c40444d01c993aa20a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/SeLYJ6_qT_k?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://ieeexplore.ieee.org/document/11082470">Paper</a>]</p><p>Thanks, Bram!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="zlms46exidu">The “Amazing Hand” from Pollen Robotics costs less than $250.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="10bac1be623c9dc8bdd94c5c654179eb" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ZlmS46EXidU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://huggingface.co/blog/pollen-robotics/amazing-hand">Pollen</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="rkr4bjocbwq"><em>Welcome to our Unboxing Day! After months of waiting, our humanoid robot has finally arrived at Fraunhofer IPA in Stuttgart.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a590fa31a321a39dfe93303abf4732aa" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Rkr4BJOCbwQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I used to take stretching classes from <a href="https://www.guinnessworldrecords.com/world-records/fastest-time-to-enter-a-suitcase" target="_blank">a woman who could do this backwards in 5.43 seconds</a>.</p><p>[<a href="https://www.ipa.fraunhofer.de/en/press-media/humanoid-robots.html">Fraunhofer</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="xvpldqfq3mo"><em>At the Changchun stop of the VOYAGEX Music Festival on July 12, PNDbotics’ full-sized humanoid robot Adam took the stage as a keytar player with the famous Chinese musician Hu Yutong’s band.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="cc27a66748aaaac07bd050a4b7e2f97d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/xVPldQfq3Mo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://pndbotics.com/robot">PNDbotics</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jucvusfmjc8"><em>Material movement is the invisible infrastructure of hospitals, airports, cities–everyday life. We build robots that support the people doing this essential, often overlooked work. 
Watch our CEO Brad Porter reflect on what inspired Cobot.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ae7c83dc7d565fe18b3d9b4449f6df54" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/jUCVUsfmJc8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.co.bot/">Cobot</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="adhvcog4pxy">Yes please.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="429b871d5cbb145ea6c17570e2033ebc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AdHvcOG4pXY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.pollen-robotics.com/reachy/">Pollen</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="bnwzlhtq7us">I think I could get to the point of being okay with this living in my bathroom.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="415fc3b3d56eb41ecd3f354dccf6c88c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/bnwzLhTq7us?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://arxiv.org/abs/2507.06053">Paper</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="4sn8awtejpm"><em>Thanks to its social perception, high expressiveness, and out-of-the-box integration, TIAGo Head offers the ultimate human-robot interaction experience.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9cbf08bedf0f0a02753523694bc8f010" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/4SN8AWTEJPM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://pal-robotics.com/robot/tiago-head/">PAL Robotics</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="1aebdbj2w-u"><em>Sneak peek: Our No Manning Required Ship (NOMARS) Defiant unmanned surface vessel is designed to operate for up to a year at sea without human intervention. 
In-water testing is preparing it for an extended at-sea demonstration of reliability and endurance.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d4055b8e6e1810059c946097948380a6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1aebdBj2W-U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>Excellent name for any ship.</p><p>[<a href="https://www.darpa.mil/research/programs/no-manning-required-ship">DARPA</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="1kjcg3eailq"><em>At the 22nd International Conference on Ubiquitous Robots (UR2025), high school student and robotics researcher Ethan Hong was honored as a Special Invited Speaker for the conference banquet and “Robots With Us” panel. In this heartfelt and inspiring talk, Ethan shares the story behind Food Angel—a food delivery robot he designed and built to support people experiencing homelessness in Los Angeles. Motivated by the growing crises of homelessness and food insecurity, Ethan asked a simple but profound question: “Why not use robots to help the unhoused?”</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="121bff0259421dc6f204821f08c29986" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1KJcG3EaiLQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://2025.ubiquitousrobots.org/">UR2025</a>]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 18 Jul 2025 15:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-robot-metabolism</guid><category>Video friday</category><category>Robotics</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/structures-in-various-environments-made-of-robotic-modules-ocean-desert-river-ruins-desert-tower-living-room.png?id=61241507&width=980"></media:content></item><item><title>Video Friday: Reachy Mini Brings the Cute</title><link>https://spectrum.ieee.org/video-friday-reachy-mini</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/five-colorful-robots-in-a-row-with-antennae-resembling-toys-on-a-wooden-surface.jpg?id=61192609&width=1245&height=700&coordinates=0%2C80%2C0%2C80"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://ifac2025-msrob.com/">IFAC Symposium on Robotics</a>: 15–18 July 2025, PARIS</h5><h5><a href="https://2025.robocup.org/">RoboCup 2025</a>: 15–21 July 2025, BAHIA, BRAZIL</h5><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="jvdbjz-qr18"><em>Reachy Mini is an expressive, open-source robot designed for human-robot interaction, creative coding, and AI experimentation. Fully programmable in Python (and soon JavaScript, Scratch) and priced from $299, it’s your gateway into robotics AI: fun, customizable, and ready to be part of your next coding project.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b91b74227e6fd6ca5f9578445714e8b7" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/JvdBJZ-qR18?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I’m so happy that <a data-linked-post="2669372232" href="https://spectrum.ieee.org/video-friday-reachy-2" target="_blank">Pollen</a> and Reachy found a home with Hugging Face, but I hope they understand that they are never, ever allowed to change that robot’s face. O-o</p><p>[<a href="https://huggingface.co/blog/reachy-mini">Reachy Mini</a>] via [<a href="https://huggingface.co/" target="_blank">Hugging Face</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pinuclin3dg"><em>General-purpose robots promise a future where household assistance is ubiquitous and aging in place is supported by reliable, intelligent help. These robots will unlock human potential by enabling people to shape and interact with the physical world in transformative new ways. At the core of this transformation are Large Behavior Models (LBMs)—embodied AI systems that take in robot sensor data and output actions. LBMs are pretrained on large, diverse manipulation datasets and offer the key to realizing robust, general-purpose robotic intelligence. 
Yet despite their growing popularity, we still know surprisingly little about what today’s LBMs actually offer—and at what cost. This uncertainty stems from the difficulty of conducting rigorous, large-scale evaluations in real-world robotics. As a result, progress in algorithm and dataset design is often guided by intuition rather than evidence, hampering progress. Our work aims to change that.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="44fd8ff77c1ad6b3e4a08b866544d209" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/DeLpnTgzJT4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://toyotaresearchinstitute.github.io/lbm1/">Toyota Research Institute</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pinuclin3dg"><em>Kinisi Robotics is advancing the frontier of physical intelligence by developing AI-driven robotic platforms capable of high-speed, autonomous pick-and-place operations in unstructured environments. This video showcases Kinisi’s latest wheeled-base humanoid performing dexterous bin stacking and item sorting using closed-loop perception and motion planning. The system combines high-bandwidth actuation, multi-arm coordination, and real-time vision to achieve robust manipulation without reliance on fixed infrastructure. By integrating custom hardware with onboard intelligence, Kinisi enables scalable deployment of general-purpose robots in dynamic warehouse settings, pushing toward broader commercial readiness for embodied AI systems.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9f48158ddbd19d31e7ec30146f0e9f8e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/PinUCliN3dg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.kinisi.com/">Kinisi Robotics</a>]</p><p>Thanks, Bren!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="ay_-z9m18p0"><em>In this work, we develop a data collection system where human and robot data are collected and unified in a shared space and propose a modularized cross-embodiment Transformer that is pretrained on human data and fine-tuned on robot data. This enables high data efficiency and effective transfer from human to quadrupedal embodiments, facilitating versatile manipulation skills for unimanual and bimanual, non-prehensile and prehensile, precise tool-use, and long-horizon tasks, such as cat litter scooping!</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="fc677f3b22187a3788fbb621e9ff349e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ay_-z9M18p0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://human2bots.github.io/">Human2LocoMan</a>]</p><p>Thanks, Yaru!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="clfuhynfoey"><em>LEIYN is a quadruped robot equipped with an active waist joint. 
It achieves the world’s fastest chimney climbing through dynamic motions learned via reinforcement learning.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="437700e4aa4488bc7927c869b14d887e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/cLfUhyNFOeY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.jsk.t.u-tokyo.ac.jp/">JSK Lab</a>]</p><p>Thanks, Keita!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="k0ad-jver10"><a data-linked-post="2658569185" href="https://spectrum.ieee.org/quadrupedal-robot-shins-turns-biped" target="_blank">Quadrupedal robots</a> are really just bipedal robots that haven’t learned to walk on two legs yet.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="05f047fee8ebc6b334275b819887f077" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/k0aD-JvER10?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://arclab.hku.hk/">Adaptive Robotic Controls Lab, University of Hong Kong</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="6zzuyh0jlha"><em>This study introduces a biomimetic self-healing module for tendon-driven legged robots that uses robot motion to activate liquid metal sloshing, which removes surface oxides and significantly enhances healing strength. Validated on a life-sized monopod robot, the module enables repeated squatting after impact damage, marking the first demonstration of active self-healing in high-load robotic applications.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9f330f019896b3e7dd879963b695620e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/6ZZuyH0jLhA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://haraduka.github.io/">University of Tokyo</a>]</p><p>Thanks, Kento!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="mcuo_03kezi">That whole putting wheels on quadruped robots thing was a great idea that someone had way back when.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9f136c6987283ea4684833db4f6c59dd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Mcuo_03kezI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.pudurobotics.com/en">Pudu Robotics</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="yswx94xbluc">I know nothing about this video except that it’s very satisfying and comes from a YouTube account that hasn’t posted in six years.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7cdbf8d1e7b9e81ee1086507af5e117d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" 
height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/yswX94XBLuc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.youtube.com/@young-jaebae2863">Young-jae Bae YouTube</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="wnprlir4zbw"><em>Our AI WORKER now comes in a new Swerve Drive configuration, optimized for logistics environments. With its agile and omnidirectional movement, the swerve-type mobile base can efficiently perform various logistics tasks such as item transport, shelf navigation, and precise positioning in narrow aisles.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f600956954a1856dbafc6a146241fee0" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/WNpRlIr4zbw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>Wait, you can have a bimanual humanoid without legs? I am shocked.</p><p>[<a href="https://ai.robotis.com/">ROBOTIS</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="v2xngkxqrlu">I can’t tell whether I need an office assistant, or if I just need snacks.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="06aedcff278ab2202361552c4d86234e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/v2XnGkXQrLU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://pndbotics.com/humanoid">PNDbotics</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="xgfsctqzakw">“MagicBot Z1: Atomic kinetic energy, the brave are fearless,” says the MagicBot website. Hard to argue with that!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="da2c39cfb4b53141144af4a627794763" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/XGfSCtQzaKw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.magiclab.top/z1">MagicLab</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="dd3ybkoukhk"><em>We’re excited to announce our new HQ in Palo Alto [Calif.]. As we grow, consolidating our Sunnyvale [Calif.] 
and Moss [Norway] team under one roof will accelerate our speed to ramping production and getting NEO into homes near you.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="528b27f97c27c09c81df91ac148de70b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dD3YBkOUKHk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I’m not entirely sure that moving from Norway to California is an upgrade, honestly.</p><p>[<a href="https://www.1x.tech/">1X</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="drv_a8pxr0y"><em>Jim Kernan, chief product officer at Engineered Arts, shares how they’re commercializing humanoid robots—blending AI, expressive design, and real-world applications to build trust and engagement.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="95eb6a1fbbca56e2408cb4849008c517" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/DrV_A8PXR0Y?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://humanoidssummit.com/">Humanoids Summit</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="l0wizdtovq4"><em>In the second installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots <a data-linked-post="2650275007" href="https://spectrum.ieee.org/astro-teller-captain-of-moonshots-at-x" target="_blank">Astro Teller</a> sits down with André Prager, former chief engineer at Wing, for a conversation about the early days of Wing and how the team solved some of their toughest engineering challenges to develop simple, lightweight, inexpensive delivery drones that are now being used every day across three continents.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0eac2eb9f72293166a6b92d5588ea7cd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/L0WIZDtovQ4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://x.company/moonshotpodcast/">Moonshot Podcast</a>]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 11 Jul 2025 16:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-reachy-mini</guid><category>Robotics events</category><category>Video friday</category><category>Robotics</category><category>Human-robot interaction</category><category>Humanoid robots</category><category>Quadruped robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/five-colorful-robots-in-a-row-with-antennae-resembling-toys-on-a-wooden-surface.jpg?id=61192609&width=980"></media:content></item><item><title>Video Friday: Cyborg Beetles May Speed Disaster Response One Day</title><link>https://spectrum.ieee.org/video-friday-cyborg-beetles</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/black-beetle-with-an-electronics-backpack-on-human-fingers-with-colorful-plastic-boxes-in-the-background.jpg?id=61144831&width=1245&height=700&coordinates=0%2C292%2C0%2C292"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2025.worldhaptics.org/">IEEE World Haptics</a>: 8–11 July 2025, SUWON, SOUTH KOREA</h5><h5><a href="https://ifac2025-msrob.com/">IFAC Symposium on Robotics</a>: 15–18 July 2025, PARIS</h5><h5><a href="https://2025.robocup.org/">RoboCup 2025</a>: 15–21 July 2025, BAHIA, BRAZIL</h5><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://actuate.foxglove.dev/">ACTUATE 2025</a>: 23–24 September 2025, SAN FRANCISCO</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="cydb5qqkrcm"><em>Common beetles equipped with microchip backpacks could one day be used to help search-and-rescue crews locate survivors within hours instead of days following disasters such as building and mine collapses. The University of Queensland’s Dr. Thang Vo-Doan and Research Assistant Lachlan Fitzgerald have demonstrated they can remotely guide darkling beetles (</em>Zophobas morio<em>) fitted with the packs via video-game controllers.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3c411b8622085e3da169aaaa0ff05a95" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/CyDB5qqKrCM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://advanced.onlinelibrary.wiley.com/doi/10.1002/advs.202502095">Paper</a> ] via [ <a href="https://www.uq.edu.au/news/article/2025/07/cyborg%E2%80%99-beetles-could-revolutionise-urban-search-and-rescue">University of Queensland</a> ]</p><p>Thanks, Thang!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="n0iqenjpzrw"><em>This is our latest work about six-doF hand-based teleoperation for omnidirectional aerial robots, which shows an intuitive teleoperation system for advanced aerial robot. 
This work has been presented in 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025).</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="50241ecef56c2efb848bd4d89107a475" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/n0IQEnjPzrw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dragon.t.u-tokyo.ac.jp/">DRAGON Lab</a> ]</p><p>Thanks, Moju!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="w6f8arzgnky">Pretty sure we’ve seen this LimX humanoid before, and we’re seeing it again right now, but hey, the first reveal is just ahead!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4743f924ab14e542a8fb0252b93b441f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/w6F8aRZGnkY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]</p><p>Thanks, Jinyan!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="dyrlhxhyira"><em>Soft robot arms use soft materials and structures to mimic the passive compliance of biological arms that bend and extend. Here, we show how relying on patterning structures instead of inherent material properties allows soft robotic arms to remain compliant while continuously transmitting torque to their environment. We demonstrate a <a data-linked-post="2650274519" href="https://spectrum.ieee.org/robot-octopus-points-the-way-to-soft-robotics-with-eight-wiggly-arms" target="_blank">soft robotic arm</a> made from a pair of <a data-linked-post="2656604055" href="https://spectrum.ieee.org/shapeshifting-robots" target="_blank">mechanical metamaterials</a> that act as compliant constant-velocity joints.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c868a92a668f5816c93e182b5388b78c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dyrLHXHYirA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.science.org/doi/10.1126/scirobotics.ads0548">Paper</a> ] via [ <a href="https://transformativerobotlab.com/">Transformative Robotics Lab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="iu--yylt3mk">Selling a platform is really hard, but I hope K-Scale can succeed with its open source humanoid.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6bf60d512403bd21bfb36290236e15fa" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/IU--YylT3Mk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.kscale.dev/">K-Scale</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="di1j4apcbmc"><em>MIT CSAIL researchers combined GenAI and a physics 
simulation engine to refine <a data-linked-post="2656053300" href="https://spectrum.ieee.org/robot-design" target="_blank">robot designs</a>. The result: a machine that outjumped a robot designed by humans.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6c399a5107100a9973527f07054c4463" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dI1J4aPcbmc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://news.mit.edu/2025/using-generative-ai-help-robots-jump-higher-land-safely-0627">MIT News</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="30cmkhijumc"><em>ARMstrong Dex is a human-scale dual-arm hydraulic robot under development at the Korea Atomic Energy Research Institute (KAERI) for disaster-response applications. Designed with dimensions similar to those of an adult human, it combines human-equivalent reach and dexterity with force output that exceeds human physical capabilities, enabling it to perform extreme heavy-duty tasks in hazardous environments.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b053e3030c7f62cbc174c6f516399de0" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/30cmKhijuMc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.kaeri.re.kr/eng/">Korea Atomic Energy Research Institute</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="ofi_upx2jfk"><em>This is a demonstration of in-hand object rotation with Torobo Hand. Torobo Hand is modeled in simulation, and a control policy is trained within several hours using large-scale parallel reinforcement learning in Isaac Sim. The trained policy can be executed without any additional training in both a different simulator (MuJoCo) and on the real Torobo Hand.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2cef42e8123b29fb43928e94feb5ecea" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ofI_uPx2Jfk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.tokyo/products/hand/">Tokyo Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="dpgjkycg1x8"><em>Since 2005, Ekso Bionics has been developing and manufacturing <a data-linked-post="2650274760" href="https://spectrum.ieee.org/paraplegic-man-walks-in-ekso-robotic-exoskeleton-to-demo-its-killer-app" target="_blank">exoskeleton bionic devices</a> that can be strapped on as wearable robots to enhance the strength, mobility, and endurance of soldiers, patients, and workers. 
These robots have a variety of applications in the medical, military, industrial, and consumer markets, helping rehabilitation patients walk again and workers preserve their strength.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d70dc7e4d58d46ecab6c6ca587582da6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/DPgjKyCg1X8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://eksobionics.com/">Ekso Bionics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="tailjwskhfq"><em>Sponsored by Raytheon, an RTX business, the 2025 east coast Autonomous Vehicle Competition was held at XElevate in Northern Virginia. Student Engineering Teams from five universities participated in a two-semester project to design, develop, integrate, and compete two autonomous vehicles that could identify, communicate, and deliver a medical kit with the best accuracy and time.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="316493a24f90bedaf0ee53333043ea62" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/TaiLjWskhFQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.rtx.com/">RTX</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="bqxtmweafso">This panel is from the Humanoids Summit in London: “Investing in the Humanoids Robotics Ecosystem—a VC Perspective.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1447102cae50b42757c27dee64e6e6bb" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/BqxTMwEAFSo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://humanoidssummit.com/hslondon2025agenda">Humanoids Summit</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Sat, 05 Jul 2025 15:04:05 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-cyborg-beetles</guid><category>Video friday</category><category>Robotics</category><category>Cyborg</category><category>Humanoid robots</category><category>Exoskeleton</category><category>Autonomous vehicles</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/black-beetle-with-an-electronics-backpack-on-human-fingers-with-colorful-plastic-boxes-in-the-background.jpg?id=61144831&width=980"></media:content></item><item><title>Robotic Arm “Feels” Using Sound</title><link>https://spectrum.ieee.org/farm-robots-sound-based-sensing</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-robotic-arm-making-contact-with-a-leafy-branch-during-a-sound-experiment.jpg?id=61106569&width=1245&height=700&coordinates=0%2C643%2C0%2C644"/><br/><br/><p><em>This article is part of our exclusive </em><a href="https://spectrum.ieee.org/collections/journal-watch/" target="_self"><em>IEEE Journal Watch series</em></a><em> in partnership with <a href="https://spectrum.ieee.org/tag/ieee-xplore" target="_self">IEEE Xplore</a>.</em></p><p>Agricultural robots could help farmers harvest food under tough environmental conditions, especially as temperatures continue to rise. However, creating affordable robotic arms that can gracefully and accurately navigate the thick network of branches and trunks of plants can be challenging. </p><p>In a recent study, <a href="https://ieeexplore.ieee.org/document/11021384" target="_blank">researchers developed a sensing system</a>, called SonicBoom, which allows autonomous robots to use sound to sense the objects it touches. The approach, which can accurately localize or “feel” the objects it encounters with centimeter-level precision, is described in a study published 2 June in <a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7083369" rel="noopener noreferrer" target="_blank"><em>IEEE Robotics and Automation Letters</em></a>. </p><p>Moonyoung (Mark) Lee is<strong> </strong>a fifth-year Ph.D. student at Carnegie Mellon University’s Robotics Institute who was involved in developing SonicBoom. He notes that many autonomous robots currently rely on a collection of tiny camera-based <a href="https://spectrum.ieee.org/tag/tactile-sensors" target="_blank">tactile sensors</a>. Minicameras beneath a protective gel pack that lines the robot’s surface let the sensors visually estimate the gel’s deformation to gain tactile information. However, this approach isn’t ideal in agricultural settings, when branches are likely to occlude the visual sensors. What’s more, camera-based sensors can be expensive and could be easily damaged in this context. </p><p>Another option is pressure sensors, Lee notes, but these would need to cover much of the surface area of the robot in order to effectively sense when it comes into contact with branches. “Imagine covering the entire robot arm surface with that kind of [sensor]. It would be expensive,” he says.</p><p>Instead, Lee and his colleagues are proposing a completely different approach that relies on sound for sensing. The system involves an array of contact microphones, which detect physical touch as sound signals that propagate through solid materials.</p><h2>How Does SonicBoom Work?</h2><p>When a robotic arm touches a branch, the resulting sound waves travel down the robotic arm until they encounter the array of contact microphones. 
<p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="bfd2795046c4d1b7896487f0ca8f37fe" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/4bdHyQtuqrM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">In this video, see SonicBoom in action during laboratory testing. </small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit..."> <a href="https://youtu.be/4bdHyQtuqrM" target="_blank">youtu.be</a> </small> </p><p>Lee notes that this approach allows microphones to be embedded deeper in the robotic arm. This means they are less prone to damage compared to traditional visual sensors on the exterior of a robotic arm. “The contact microphones can be easily protected from very harsh, abrasive contacts,” he explains.</p><p>The approach also uses only a small handful of microphones dispersed across the robotic arm, rather than many visual or pressure sensors densely coating it.</p><p>To help <a href="https://iamlab-cmu.github.io/sonicboom/" target="_blank">SonicBoom</a> better localize points of contact, the researchers used an AI model trained on data collected by tapping the robotic arm more than 18,000 times with a wooden rod. As a result, SonicBoom was able to localize contact on the robotic arm with an error of just 0.43 centimeters for objects it was trained to detect. It was also able to localize contact with novel objects, such as ones made of plastic or aluminum, with an error of 2.22 centimeters.</p><p>In a subsequent study pending publication, Lee and his colleagues are using <a href="https://arxiv.org/abs/2505.12665" target="_blank">new data to train SonicBoom</a> to identify what kind of object it’s encountering<span>—</span>for example, a leaf, branch, or trunk.</p><p>“With SonicBoom, you can blindly tap around and know where the [contact happens], but at the end of the day, for the robot, the really important information is: Can I keep pushing, or am I hitting a strong trunk and should rethink how to move my arm?” he explains.</p><p>Of note, SonicBoom has yet to be tested in real-world agricultural settings, Lee says.</p>]]></description><pubDate>Sat, 28 Jun 2025 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/farm-robots-sound-based-sensing</guid><category>Agricultural robots</category><category>Autonomous robots</category><category>Journal watch</category><category>Farming</category><dc:creator>Michelle Hampson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-robotic-arm-making-contact-with-a-leafy-branch-during-a-sound-experiment.jpg?id=61106569&width=980"></media:content></item><item><title>Video Friday: This Quadruped Throws With Its Whole Body</title><link>https://spectrum.ieee.org/robot-arm-thrower</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/quadruped-robot-with-manipulator-arm-placed-on-pavement-near-a-table-tennis-setup.png?id=61112174&width=1245&height=700&coordinates=0%2C79%2C0%2C80"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://ias-19.org/">IAS 2025</a>: 30 June–4 July 2025, GENOA, ITALY</h5><h5><a href="https://clawar.org/icres2025/">ICRES 2025</a>: 3–4 July 2025, PORTO, PORTUGAL</h5><h5><a href="https://2025.worldhaptics.org/">IEEE World Haptics</a>: 8–11 July 2025, SUWON, SOUTH KOREA</h5><h5><a href="https://ifac2025-msrob.com/">IFAC Symposium on Robotics</a>: 15–18 July 2025, PARIS</h5><h5><a href="https://2025.robocup.org/">RoboCup 2025</a>: 15–21 July 2025, BAHIA, BRAZIL</h5><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="3ysgbn6ca8a"><em>Throwing is a fundamental skill that enables robots to manipulate objects in ways that extend beyond the reach of their arms. We present a control framework that combines learning and model-based control for prehensile whole-body throwing with legged mobile manipulators. 
This work provides an early demonstration of prehensile throwing with quantified accuracy on hardware, contributing to progress in dynamic whole-body manipulation.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c380e7cc77c2ff7df6924bc3589434b5" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/3ysgbN6Ca8A?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arxiv.org/abs/2506.16986">Paper</a> ] from [ <a href="https://ethz.ch/en/studies/master/degree-programmes/engineering-sciences/robotics-systems-and-control.html" target="_blank">ETH Zurich</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="wgr7iizzfr0">As it turns out, in many situations <a data-linked-post="2666662286" href="https://spectrum.ieee.org/humanoid-robots" target="_blank">humanoid robots</a> don’t necessarily need legs at all.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="47302ecf2f03d6151a2dfd94f1ce29d5" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/WgR7IIzzfR0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.robotera.com/en/enq5">ROBOTERA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="xmbuvofnu5a"><em>Picking-in-Motion is a brand new feature of Autopicker 2.0. Instead of remaining stationary while picking an item, Autopicker begins traveling toward its next destination immediately after retrieving a storage tote, completing the pick while on the move. 
The robot then drops off the first storage tote at an empty slot near the next pick location before collecting the next tote.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="8403fb62dce8c4ce1457ec30a8bb67f5" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/xmbuvoFNu5A?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://brightpick.ai/">Brightpick</a> ]</p><p>Thanks, Gilmarie!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="rwsb78emfgi">I am pretty sure this is not yet real, but boy is it shiny.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c8476f7f896244eb4a23e8481b9be4b7" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/rWSb78EmFGI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.softbank.jp/corp/philosophy/technology/special/ntn-solution/haps/">SoftBank</a> ] via [ <a href="https://robotstart.info/2025/06/26/sb-haps-2026.html">RobotStart</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="lf5fw4qunlk">Why use one thumb when you can use two instead?</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3059dc10a2edb86d9db46fdcd9e0bde6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/LF5fW4qUnlk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.tu.berlin/en/robotics">TU Berlin</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="3fwefdibpk0"><em>Kirigami offers unique opportunities for guided morphing by leveraging the geometry of the cuts. This work presents inflatable <a data-linked-post="2650276722" href="https://spectrum.ieee.org/artificial-snakeskin-helps-robots-get-their-slither-on" target="_blank">kirigami crawlers</a> created by introducing cut patterns into heat-sealable textiles to achieve locomotion upon cyclic pneumatic actuation. We found that the kirigami actuators exhibit directional anisotropic friction properties when inflated, having higher friction coefficients against the direction of the movement, enabling them to move across surfaces with varying roughness. 
We further enhanced the functionality of inflatable kirigami actuators by introducing multiple channels and segments to create functional soft robotic prototypes with versatile locomotion capabilities.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="772fc720afe02dd29601706a74c39c62" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/3fWEFDibPK0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://advanced.onlinelibrary.wiley.com/doi/10.1002/adrr.202500044">Paper</a> ] from [ <a href="https://www.softrobotics.dk/" target="_blank">SDU Soft Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="1gviu3trrps">Lockheed Martin wants to get into the <a data-linked-post="2652904040" href="https://spectrum.ieee.org/mars-sample-return-mission" target="_blank">Mars Sample Return</a> game for a mere US $3 billion.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d6f03aad050bbbe77173bfc37241875c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1GViU3tRRps?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://lockheedmartin.com/en-us/news/features/2025/bringing-commercial-industry-efficiency-to-exploration-lockheed-martins-plan-for-mars-sample-return.html">Lockheed Martin</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="9hwyniiw4nm">This is pretty gross and exactly what you want a robot to be doing: dealing with municipal solid waste.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3a9e942998b6b6ad57bfd01a18012ab7" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/9HWYNIiW4NM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.terex.com/zenrobotics/waste-types/municipal-solid-waste">ZenRobotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="h5z32e7uakm"><em>Drag your mouse or move your phone to explore this 360-degree panorama provided by <a data-linked-post="2650269754" href="https://spectrum.ieee.org/curiosity-turns-one-on-mars" target="_blank">NASA’s Curiosity Mars rover.</a> This view shows some of the rover’s first looks at a region that has only been viewed from space until now, and where the surface is crisscrossed with spiderweb-like patterns.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1de2285b55a5583bba358034225f6c47" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/H5z32E7uaKM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://science.nasa.gov/mission/msl-curiosity/">NASA Jet Propulsion Laboratory</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="hpvza8rfiks">In case you were wondering, <a 
data-linked-post="2667116159" href="https://spectrum.ieee.org/irobot-amazon" target="_blank">iRobot</a> is still around.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="94f11994637afc79d347441f03a021f7" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/hPVZA8rfiKs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.irobot.com/">iRobot</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="shmgjzkqzdm">Legendary roboticist <a data-linked-post="2650271281" href="https://spectrum.ieee.org/cynthia-breazeal-unveils-jibo-a-social-robot-for-the-home" target="_blank">Cynthia Breazeal</a> talks about the equally legendary Personal Robots Group at the MIT Media Lab.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="96578b094f03b3ec1f7d6396f02f9a1c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/shMGJZkQzDM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.media.mit.edu/groups/personal-robots/overview/">MIT Personal Robots Group</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="kj0mp74v4kq"><em>In the first installment of our Moonshot Podcast Deep Dive video interview series, X’s Captain of Moonshots <a data-linked-post="2650275007" href="https://spectrum.ieee.org/astro-teller-captain-of-moonshots-at-x" target="_blank">Astro Teller</a> sits down with <a data-linked-post="2650257886" href="https://spectrum.ieee.org/sebastian-thrun-will-teach-you-how-to-build-your-own-self-driving-car-for-free" target="_blank">Sebastian Thrun</a>, cofounder of the Moonshot Factory, for a conversation about the history of Waymo and Google X, the ethics of innovation, the future of AI, and more.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="54cee25119337253957257dc6f4bd176" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/kj0mp74V4kQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://x.company/">Google X, The Moonshot Factory</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 27 Jun 2025 16:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/robot-arm-thrower</guid><category>Robotics</category><category>Video friday</category><category>Manipulators</category><category>Crawler</category><category>Industrial robots</category><category>Humanoid robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/quadruped-robot-with-manipulator-arm-placed-on-pavement-near-a-table-tennis-setup.png?id=61112174&width=980"></media:content></item><item><title>Video Friday: Jet-Powered Humanoid Robot Lifts Off</title><link>https://spectrum.ieee.org/video-friday-jet-powered-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/close-up-of-a-humanoid-robot-showing-intricate-mechanical-components-and-wiring-with-small-jet-engines-on-its-arms-and-torso.jpg?id=61079219&width=1245&height=700&coordinates=0%2C112%2C0%2C113"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://roboticsconference.org/">RSS 2025</a>: 21–25 June 2025, LOS ANGELES</h5><h5><a href="https://robotx.ethz.ch/education/summer-school.html">ETH Robotics Summer School</a>: 21–27 June 2025, GENEVA</h5><h5><a href="https://ias-19.org/">IAS 2025</a>: 30 June–4 July 2025, GENOA, ITALY</h5><h5><a href="https://clawar.org/icres2025/">ICRES 2025</a>: 3–4 July 2025, PORTO, PORTUGAL</h5><h5><a href="https://2025.worldhaptics.org/">IEEE World Haptics</a>: 8–11 July 2025, SUWON, SOUTH KOREA</h5><h5><a href="https://ifac2025-msrob.com/">IFAC Symposium on Robotics</a>: 15–18 July 2025, PARIS</h5><h5><a href="https://2025.robocup.org/">RoboCup 2025</a>: 15–21 July 2025, BAHIA, BRAZIL</h5><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="t1bnhot4d5q"><em>This is the first successful vertical takeoff of a jet-powered flying humanoid robot, developed by Artificial and Mechanical Intelligence (AMI) at Istituto Italiano di Tecnologia (IIT). The robot lifted ~50 cm off the ground while maintaining dynamic stability, thanks to advanced AI-based control systems and aerodynamic modeling.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2666111cae7290d9efdf2108af79560f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/t1bNHoT4D5Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>We will have much more on this in the coming weeks!</p><p>[<a href="https://www.nature.com/articles/s44172-025-00447-w">Nature</a>] via [<a href="https://opentalk.iit.it/en/iit-demonstrates-that-a-humanoid-robot-can-fly/">IIT</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="iwcnynpjnm0"><em>As a first step toward our mission of deploying general-purpose robots, we are pushing the frontiers of what end-to-end AI models can achieve in the real world. 
We’ve been training models and evaluating their capabilities for dexterous sensorimotor policies across different embodiments, environments, and physical interactions. We’re sharing capability demonstrations on tasks stressing different aspects of manipulation: fine motor control, spatial and temporal precision, generalization across robots and settings, and robustness to external disturbances.</em></blockquote><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="3fbcd7173da8b61dcd8567ca2932e4fd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/mhfleCK_IAI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://generalistai.com/blog.html">Generalist AI</a>]</p><p>Thanks, Noah!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="iwcnynpjnm0"><em>Ground Control Robotics is introducing SCUTTLE, our newest elongate multilegged platform for mobility anywhere!</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="da081e0bf9207ff62b17fbdc33a083c9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/IWcNyNPjnM0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://groundcontrolrobotics.com/">Ground Control Robotics</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="udkddqxth5q"><em>Teleoperation has been around for a while, but what hasn’t been is precise, real-time force feedback. That’s where Flexiv steps in to shake things up. Now, whether you’re across the room or across the globe, you can experience seamless, high-fidelity remote manipulation with a sense of touch.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ec233d3290c27dd0cea84b0f17763f3e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/udkddqxth5Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>This sort of thing usually takes some human training, for which you’d be best served by <a data-linked-post="2657676851" href="https://spectrum.ieee.org/video-friday-iss-robot-arms" target="_blank">robot arms</a> with <a data-linked-post="2650273235" href="https://spectrum.ieee.org/esa-space-teleoperation-tests" target="_blank">precise, real-time force feedback</a>. Hmm, I wonder where you’d find those...?</p><p>[<a href="https://www.flexiv.com/">Flexiv</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="xpx6ddrybv4"><em>The 1X World Model is a data-driven simulator for humanoid robots built with a grounded understanding of physics. It allows us to predict—or “hallucinate”—the outcomes of NEO’s actions before they’re taken in the real world. 
Using the 1X World Model, we can instantly assess the performance of AI models—compressing development time and providing a clear benchmark for continuous improvement.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bbf4dbf833d6ae5f2f31d30e3867918e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/xPX6dDRYbV4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.1x.tech/">1X</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="atr87nwq3eq"><em>SLAPBOT is an interactive robotic artwork by Hooman Samani and Chandler Cheng, exploring the dynamics of physical interaction, artificial agency, and power. The installation features a robotic arm fitted with a soft, inflatable hand that delivers slaps through pneumatic actuation, transforming a visceral human gesture into a programmed robotic response.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="98480c72f39d07a271776fa0143a9b44" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ATR87nwq3eQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I asked, of course, whether SLAPBOT slaps people, and it does not: “Despite its provocative concept and evocative design, SLAPBOT does not make physical contact with human participants. It simulates the gesture of slapping without delivering an actual strike. The robotic arm’s movements are precisely choreographed to suggest the act, yet it maintains a safe distance.”</p><p>[<a href="https://hoomansamani.com/slapbot/">SLAPBOT</a>]</p><p>Thanks, Hooman!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="xht3nvc9d-i">Inspecting the bowels of ships is something we’d really like robots to be doing for us, please and thank you.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f8e59dc2b2d1ebc58f4a8fa063562914" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/XhT3nVC9d-I?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.ntnu.edu/itk/research/robotics" target="_blank">Norwegian University of Science and Technology</a>] via [<a href="https://github.com/ntnu-arl/predictive_planning_ros">GitHub</a>]</p><p>Thanks, Kostas!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8a46uap367k"><em>H2L Corporation (hereinafter referred to as H2L) has unveiled a new product called “Capsule Interface,” which transmits whole-body movements and strength, enabling new shared experiences with robots and avatars. 
A product introduction video depicting a synchronization never before experienced by humans was also released.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="cd7cebba8a246b0d342bfa262937b917" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8a46Uap367k?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://h2l.jp/2025/06/18/%e5%85%a8%e8%ba%ab%e3%83%aa%e3%82%a2%e3%83%ab%e4%bd%93%e9%a8%93%ef%bc%81%e8%a6%8b%e3%82%8b%e8%81%9e%e3%81%8f%e3%81%ae%e5%85%88%e3%82%92%e5%89%b5%e3%82%8b%e3%80%82%e5%8b%95%e3%81%8d%e3%81%a8%e5%8a%9b/">H2L Corp.</a>] via [<a href="https://robotstart.info/2025/06/18/h2l-capsule-interface-launch.html">RobotStart</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="vbugenau3re">How do you keep a robot safe without requiring it to look at you? <a data-linked-post="2668807636" href="https://spectrum.ieee.org/feral-cat-radar-detector" target="_blank">Radar</a>!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6b6cefa1aff74634e83e5435745c35bc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/vbuGenAu3rE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://ieeexplore.ieee.org/document/11037369">Paper</a>] via [<a href="https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7361" target="_blank">IEEE Sensors Journal</a>]</p><p>Thanks, Bram!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pcank-5e-qo"><em>We propose Aerial Elephant Trunk, an aerial continuum manipulator inspired by the elephant trunk, featuring a small-scale quadrotor and a dexterous, compliant tendon-driven continuum arm for versatile operation in both indoor and outdoor settings.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4a2829d05541e793bcf454fe8f2c394d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/PcanK-5e-qo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://arclab.hku.hk/">Adaptive Robotics Controls Lab</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="w_qloi1pokw"><em>This video demonstrates a heavy weight lifting test using the ARMstrong Dex robot, focusing on a 40 kg bicep curl motion. ARMstrong Dex is a human-sized, dual-arm hydraulic robot currently under development at the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. 
Designed to perform tasks flexibly like a human while delivering high power output, ARMstrong Dex is capable of handling complex operations in hazardous environments.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ed3765ff0740c1a013e9beeaae04aa6a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/W_QlOi1PoKw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.kaeri.re.kr/eng/">Korea Atomic Energy Research Institute</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="lfat803dqmw"><em>Micro-robots that can inspect water pipes, diagnose cracks, and fix them autonomously—reducing leaks and avoiding expensive excavation work—have been developed by a team of engineers led by the University of Sheffield. </em></blockquote><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="204230bab58cc85ff8909e3f3029be79" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Q2loVe5_NcE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://www.sheffield.ac.uk/news/tiny-robots-could-help-fix-leaky-water-pipes">University of Sheffield</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="lfat803dqmw"><em>We’re growing in size, scale, and impact! We’re excited to announce the opening of our serial production facility in the San Francisco Bay Area, the very first purpose-built <a data-linked-post="2650275419" href="https://spectrum.ieee.org/secretive-robotaxi-startup-zoox-prepares-for-realworld-testing" target="_blank">robotaxi</a> assembly facility in the United States. 
More space means more innovation, production, and opportunities to scale our fleet.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="504fa629f2368f24a9acf389d3e12a66" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/lfAt803DQMw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://zoox.com/">Zoox</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8-0d4lyjhqi"><em>Watch multipick in action as our pickle robot rapidly identifies, picks, and places multiple boxes in a single swing of an arm.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="36412a3fdb34a606e69841ccca3c2110" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8-0d4LyJhQI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://picklerobot.com/">Pickle</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="un_7inyazyq">And now, this.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ec349bce0e0fb055b052b7831af4f49b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/uN_7INYaZYQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://info.aibo.sony.jp/info/2024/12/creatorschallenge2025.html">Aibo</a>]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="aqrqusrezhy"><em>Cargill’s Amsterdam Multiseed facility enlists Spot and Orbit to inspect machinery and perform visual checks, enhanced by all-new AI features, as part of their “Plant of the Future” program. </em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b7c681340545a8e6334c9f64aa9dadd4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AqRquSReZHY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://bostondynamics.com/products/spot/">Boston Dynamics</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="vmsslbgktcu">This ICRA 2025 plenary talk is from Raffaello D’Andrea, entitled “Models are Dead, Long Live Models!”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d1600ac97242ef165c3fc6d7e438f123" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/vMSSlBGKtCU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://2025.ieee-icra.org/program/plenary-sessions/">ICRA 2025</a>]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="pfvctjompk8">Will data solve robotics and automation? Absolutely! Never! Who knows?! 
Let’s argue about it!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2e5008c9444113b6baa43e3fdeee54ed" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/PfvctjoMPk8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[<a href="https://2025.ieee-icra.org/announcements/event-overview-for-thursday-may-22/">ICRA 2025</a>]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 20 Jun 2025 16:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-jet-powered-robot</guid><category>Video friday</category><category>Robotics</category><category>Humanoid robots</category><category>Industrial robots</category><category>Aibo</category><category>Dexterous</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/close-up-of-a-humanoid-robot-showing-intricate-mechanical-components-and-wiring-with-small-jet-engines-on-its-arms-and-torso.jpg?id=61079219&width=980"></media:content></item><item><title>Video Friday: AI Model Gives Neo Robot Autonomy</title><link>https://spectrum.ieee.org/video-friday-neo-humanoid-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robot-and-person-standing-face-to-face-in-a-wooden-hallway-with-tall-bushy-plants.png?id=60988606&width=1245&height=700&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://www.edrcoalition.com/2025-energy-drone-robotics-summit">2025 Energy Drone & Robotics Summit</a>: 16–18 June 2025, HOUSTON</h5><h5><a href="https://roboticsconference.org/">RSS 2025</a>: 21–25 June 2025, LOS ANGELES</h5><h5><a href="https://robotx.ethz.ch/education/summer-school.html">ETH Robotics Summer School</a>: 21–27 June 2025, GENEVA</h5><h5><a href="https://ias-19.org/">IAS 2025</a>: 30 June–4 July 2025, GENOA, ITALY</h5><h5><a href="https://clawar.org/icres2025/">ICRES 2025</a>: 3–4 July 2025, PORTO, PORTUGAL</h5><h5><a href="https://2025.worldhaptics.org/">IEEE World Haptics</a>: 8–11 July 2025, SUWON, SOUTH KOREA</h5><h5><a href="https://ifac2025-msrob.com/">IFAC Symposium on Robotics</a>: 15–18 July 2025, PARIS</h5><h5><a href="https://2025.robocup.org/">RoboCup 2025</a>: 15–21 July 2025, BAHIA, BRAZIL</h5><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN, CHINA</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="qnzl5dvtdkk">Introducing Redwood—1X’s breakthrough <a data-linked-post="2671886754" href="https://spectrum.ieee.org/chain-of-thought-prompting" target="_blank">AI model</a> capable of doing chores around the home. For the first time, <a data-linked-post="2671238743" href="https://spectrum.ieee.org/video-friday-good-over-all-terrains" target="_blank">NEO Gamma</a> moves, understands, and interacts autonomously in complex human environments. 
Built to learn from real-world experiences, Redwood empowers NEO to perform end-to-end mobile manipulation tasks like retrieving objects for users, opening doors, and navigating around the home gracefully, on top of hardware designed for compliance, safety, and resilience.</blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ebbfc0b339e850cd28b7e5f5fac9c43c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/qnzL5dVTDKk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="a2f3f4b5f15fde9ec50d5116b96b764a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Dp6sqx9BGZs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.1x.tech/discover/redwood-ai">1X Technology</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="7gap8k9jz2q"><a data-linked-post="2650251927" href="https://spectrum.ieee.org/therapeutic-robots-paro-and-keepon-are-cute-but-still-costly" target="_blank">Marek Michalowski</a>, who co-created <a data-linked-post="2650276785" href="https://spectrum.ieee.org/keepon-helps-kids-learn-to-argue-better" target="_blank">Keepon</a>, has not posted to his YouTube channel in 17 years—until this week. The new post? It’s about a project from 10 years ago!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f79c76127d0315323179d7d47ac57117" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/7gAp8k9jZ2Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://jonathanproto.com/project-sundial">Project Sundial</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="lkc2y0yb89u"><em>Helix can now handle a wider variety of packaging approaching human-level dexterity and speed, bringing us closer to fully autonomous package sorting. 
This rapid progress underscores the scalability of Helix’s learning-based approach to robotics, translating quickly into real-world applications.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d3d0673b70a96eaa2d77f39e3bf16d02" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/lkc2y0yb89U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.figure.ai/news/helix">Figure</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="nzuvdu2q0zo">This is certainly an atypical Video Friday selection, but I saw this Broadway musical called “Maybe Happy Ending” a few months ago because the main characters are deprecated humanoid home-service robots. It was utterly charming, and it just won the Tony award for best new musical among others.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e25d5605adc019d20ed170c527ed4700" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/nZUVDu2q0Zo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ “<a href="https://www.maybehappyending.com/">Maybe Happy Ending</a>” ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="ptydwp9utis"><a data-linked-post="2664552687" href="https://spectrum.ieee.org/boston-dynamics-dancing-robots" target="_blank">Boston Dynamics</a> brought a bunch of Spots to “America’s Got Talent,” and kudos to them for recovering so gracefully from an on-stage failure.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7ec3c1744aa63eeb5904fa3ee0779adc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/ptYDWP9uTis?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://bostondynamics.com/products/spot/">Boston Dynamics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="41xnc4mu-hs">I think this is the first time I’ve seen end-effector changers used for either feet or heads.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1a29c3d692365d22f63dade5335a6c2f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/41XNc4Mu-hs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://unit.aist.go.jp/jrl-22022/en/">CNRS-AIST Joint Robotics Laboratory</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="tbdtcwzfeiu"><em>ChatGPT has gone fully Navrim—complete with existential dread and maximum gloom! Watch as the most pessimistic ChatGPT-powered robot yet moves chess pieces across a physical board, deeply contemplating both chess strategy and the futility of existence. 
Experience firsthand how seamlessly AI blends with robotics, even if Navrim insists there’s absolutely no point.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="08d839d8c7f07846798dcb0748af4c05" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/TbDTCwzFeIU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>Not bad for $219 all in.</p><p>[ <a href="https://vassarrobotics.com/">Vassar Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="9u0hocl0aj4"><em>We present a single-layer multimodal sensory skin made using only a highly sensitive hydrogel membrane. Using electrical impedance tomography techniques, we access up to 863,040 conductive pathways across the membrane, allowing us to identify at least six distinct types of multimodal stimuli, including human touch, damage, multipoint insulated presses, and local heating. To demonstrate our approach’s versatility, we cast the hydrogel into the shape and size of an adult human hand.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="30c9c1e376a765bd37eb45ea36471e1c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/9U0hoCL0aJ4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dshardman.co.uk/publication/scirohand/">Bio-Inspired Robotics Laboratory</a> ] paper published by [ <a href="https://www.science.org/journal/scirobotics" target="_blank">Science Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pqdqamtjwrw"><em>This paper introduces a novel robot designed to exhibit two distinct modes of mobility: rotational aerial flight and terrestrial locomotion. This versatile robot comprises a sturdy external frame, two motors, and a single wing embodying its fuselage. The robot is capable of vertical takeoff and landing in mono-wing flight mode, with the unique ability to fly in both clockwise and counterclockwise directions, setting it apart from traditional mono-wings.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="fea3e49bf322eec7899d20a2236783a4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/PQDqAMTjWrw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://journals.sagepub.com/doi/10.1177/02783649251344968">AIR Lab</a> paper ] published in [ <a href="https://journals.sagepub.com/home/ijra" target="_blank">The International journal of Robotics Research</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="hzq2hhoi6la">When TRON 1 goes to work, all he does is steal snacks from hoomans. 
Apparently.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="8d3121ce94715e6d531c92002d4575b7" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/hZQ2hhoi6lA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en/tron1">LimX Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jb2txzph7xs"><em>The 100,000th robot has just rolled off the line at Pudu Robotics’ Super Factory! This key milestone highlights our cutting-edge manufacturing strength and marks a global shipment volume of over 100,000 units delivered worldwide.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5bc0b47f42e246bf950808b9fc53d28e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Jb2tXzph7Xs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.pudurobotics.com/en">Pudu Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="cn5whdtrlv0">Now that is a big saw.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e4d5683d9d16f4c931c72693ecd4b188" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/CN5WhDTRlV0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.kuka.com/en-se/industries/solutions-database/2025/05/catonator_smartproduction-nordic">Kuka Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="2rih4zintzm"><em>NASA Jet Propulsion Laboratory has developed the Exploration Rover for Navigating Extreme Sloped Terrain or ERNEST. 
This rover could lead to a new class of low-cost planetary rovers for exploration of previously inaccessible locations on Mars and the moon.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ccaf709dda32b8b5778c7a0f178c877e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/2RiH4ZInTZM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.hou.usra.edu/meetings/lpsc2025/pdf/1729.pdf">NASA Jet Propulsion Laboratory</a> paper ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="zobe3aoz5fw"><em>Brett Adcock, founder and CEO of Figure AI, speaks with Bloomberg Television’s Ed Ludlow about how the company is training humanoid robots for logistics, manufacturing, and future roles in the home at Bloomberg Tech in San Francisco.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2467bd8c8c1fc278a40446b7ee5655e9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/zObe3aOz5fw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.figure.ai/">Figure</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="b6pz2r1uhxw"><em>Peggy Johnson, CEO of Agility Robotics, discusses how humanoid robots like Digit are transforming logistics and manufacturing. She speaks with Bloomberg Businessweek’s Brad Stone about the rapid advances in automation and the next era of robots in the workplace at Bloomberg Tech in San Francisco.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d7a77a584e471d9a057897f5a2a150d7" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/B6pz2R1UHXw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.agilityrobotics.com/">Agility Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="piyyufkvg1e">This ICRA 2025 Plenary is from Allison Okamura, titled “Rewired: The Interplay of Robots and Society.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c721b1931ac1615ffee7f918db1f8861" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/PIYyufKvG1E?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://2025.ieee-icra.org/program/plenary-sessions/">ICRA 2025</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 13 Jun 2025 16:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-neo-humanoid-robot</guid><category>Video friday</category><category>Humanoid robots</category><category>Autonomous robots</category><category>Dexterity</category><category>Dancing robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" 
url="https://spectrum.ieee.org/media-library/robot-and-person-standing-face-to-face-in-a-wooden-hallway-with-tall-bushy-plants.png?id=60988606&width=980"></media:content></item><item><title>Navigating the Dual-Use Dilemma</title><link>https://spectrum.ieee.org/navigating-the-dual-use-dilemma</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robotic-arm-holding-a-scalpel-merging-into-a-digital-blueprint-on-a-black-and-white-background.png?id=60656210&width=1245&height=700&coordinates=0%2C65%2C0%2C65"/><br/><br/><p>Open-source technology developed in the civilian sector has the capacity to also be used in military applications or be simply misused. Navigating this <a href="https://link.springer.com/article/10.1007/s11948-009-9159-9" rel="noopener noreferrer" target="_blank">dual-use</a> potential is becoming more important across engineering fields, as innovation goes both ways. While the “openness” of open-source technology is part of what drives innovation and allows everyone access, it also, unfortunately, means it’s just as easily accessible to others, including the military and criminals.</p><p>What happens when a rogue state, a nonstate militia, or a school shooter displays the same creativity and innovation with open-source technology that engineers do? This is the question we are discussing here: How can we uphold our principles of open research and innovation to drive progress while mitigating the inherent risks that come with accessible technology?</p><p>More than just open-ended risk, let’s discuss the specific challenges open-source technology and its dual-use potential have on robotics. Understanding these challenges can help engineers learn what to look for in their own disciplines.</p><h2>The Power and Peril of Openness</h2><p>Open-access publications, software, and educational content are fundamental to advancing robotics. They have democratized access to knowledge, enabled reproducibility, and fostered a vibrant, collaborative international community of scientists. Platforms like arXiv and GitHub and open-source initiatives like the <a href="https://www.ros.org/" rel="noopener noreferrer" target="_blank">Robot Operating System</a> (ROS) and the <a href="https://github.com/open-dynamic-robot-initiative/" rel="noopener noreferrer" target="_blank">Open Dynamic Robot Initiative</a> have been pivotal in accelerating robotics research and innovation, and there is no doubt that they should remain openly accessible. Losing access to these resources would be devastating to the robotics field.</p><p>However, robotics carries inherent dual-use risks since most robotics technology can be repurposed <a href="https://spectrum.ieee.org/autonomous-weapons-challenges" target="_blank">for military use</a> or <a href="https://spectrum.ieee.org/why-you-should-fear-slaughterbots-a-response" target="_blank">harmful purposes</a>. One recent example of custom-made drones in current conflicts is particularly insightful. The resourcefulness displayed by Ukrainian soldiers in repurposing and sometimes <a href="https://www.cnas.org/publications/reports/evolution-not-revolution" rel="noopener noreferrer" target="_blank">augmenting civilian drone technology</a> received worldwide, often admiring, news coverage. Their creativity has been made possible through the affordability of commercial drones, spare parts, 3D printers, and the availability of open-source software and hardware. This allows people with little technological background and money to easily create, control, and repurpose robots for military applications. One can certainly argue that this has had an empowering effect on Ukrainians defending their country. 
However, these same conditions also present opportunities for a wide range of potential bad actors.</p><p>Openly available knowledge, designs, and software can be misused to enhance existing weapons systems with capabilities like vision-based <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12663" rel="noopener noreferrer" target="_blank">navigation, autonomous targeting, or swarming</a>. Additionally, unless proper security measures are taken, the public nature of open-source code makes it vulnerable to cyberattacks, potentially allowing malicious actors to gain control of robotic systems and cause them to malfunction or be used for <a href="https://www.sciencedirect.com/science/article/pii/S2667305323000625" rel="noopener noreferrer" target="_blank">malevolent purposes</a>. Many ROS users already recognize that they do not invest enough in <a href="https://aliasrobotics.com/files/robot_cybersecurity_review.pdf" rel="noopener noreferrer" target="_blank">cybersecurity</a> for their applications.</p><h2>Guidance Is Necessary</h2><p>Dual-use risks stemming from openness in research and innovation are a concern for many engineering fields. Did you know that engineering was originally a military-only activity? The word “engineer” was coined in the Middle Ages to describe “a designer and constructor of fortifications and weapons.” Some engineering specializations, especially those that include the development of weapons of mass destruction (chemical, biological, radiological, and nuclear), have developed clear guidance, and in some cases, regulations for how research and innovation can be conducted and disseminated. They also have community-driven processes intended to mitigate dual-use risks associated with spreading knowledge. For instance, BioRxiv and MedRxiv—the preprint servers for biology and health sciences—screen submissions for material that poses a biosecurity or health risk before publishing them.</p><p>The field of robotics, in comparison, offers no specific regulation and little guidance as to how roboticists should think of and address the risks associated with openness. Dual-use risk is not taught in most universities, despite it being something that students will likely face in their careers, such as when assessing whether their work is subject to <a href="https://www.sipri.org/publications/2020/policy-reports/responsible-artificial-intelligence-research-and-innovation-international-peace-and-security" rel="noopener noreferrer" target="_blank">export-control regulations on dual-use items</a>.</p><p>As a result, roboticists may not feel they have an incentive or are equipped to evaluate and mitigate the dual-use risks associated with their work. This represents a major problem, as the likelihood of harm associated with the misuse of open robotic research and innovation is likely higher than that of nuclear and biological research, both of which require significantly more resources. Producing “do-it-yourself” robotic weapon systems using open-source design and software and off-the-shelf commercial components is relatively easy and accessible. With this in mind, we think that it’s high time for the robotics community to work toward its own set of sector-specific guidance for how researchers and companies can best navigate the dual-use risks associated with the open diffusion of their work.</p><h2>A Road Map for Responsible Robotics</h2><p>Striking a balance between security and openness is a complex challenge, but one that the robotics community must embrace. 
We cannot afford to stifle innovation, nor can we ignore the potential for harm. A proactive, multipronged approach is needed to navigate this dual-use dilemma. Drawing lessons from other fields of engineering, we propose a road map focusing on four key areas: education, incentives, moderation, and red lines.</p><h3>Education</h3><p>Integrating responsible research and innovation into robotics education at all levels is paramount. This includes not only dedicated courses but also the <a href="https://journals.uclpress.co.uk/lre/article/id/129/" rel="noopener noreferrer" target="_blank">systematic inclusion</a> of dual-use and cybersecurity considerations within core <a href="https://link.springer.com/article/10.1007/s11948-019-00164-6" rel="noopener noreferrer" target="_blank">robotics curricula</a>. We must foster a culture of responsible innovation so that we can empower roboticists to make informed decisions and proactively address potential risks.</p><p>Educational initiatives could include:</p><ul><li>Developing and disseminating open-source educational materials on responsible robotics for robotics teachers, researchers, and professionals from resources such as the <a href="https://disarmament.unoda.org/responsible-innovation-ai/resources/" rel="noopener noreferrer" target="_blank">United Nations Office for Disarmament Affairs</a> (UNODA) and the <a href="https://airesponsibly.net/education/" rel="noopener noreferrer" target="_blank">Center for Responsible AI</a> at New York University. </li><li>Organizing workshops and seminars on dual-use and ethical considerations at robotics conferences and universities.</li><li>Encouraging universities to offer courses or modules dedicated to <a href="https://journals.sagepub.com/doi/10.1177/20539517231219958" rel="noopener noreferrer" target="_blank">responsible research and innovation in robotics</a>.</li></ul><h3>Incentives</h3><p>Everyone should be encouraged to assess the potential negative consequences of making their work fully or partially open. Funding agencies can mandate risk assessments as a condition for project funding, signaling their importance. Professional organizations, like the <a href="https://www.ieee-ras.org/" rel="noopener noreferrer" target="_blank">IEEE Robotics and Automation Society</a> (RAS), can adopt and promote <a href="https://www.ieee-ras.org/industry-government/standards" rel="noopener noreferrer" target="_blank">best practices</a>, providing tools and frameworks for researchers to identify, assess, and mitigate risks. Such tools could include self-assessment checklists for individual researchers and guidance for how faculties and labs can set up ethical review boards. Academic journals and conferences can make peer-review risk assessments an integral part of the publication process, especially for high-risk applications.</p><p>Additionally, incentives like awards and recognition programs can highlight exemplary contributions to risk assessment and mitigation, fostering a culture of responsibility within the community. Risk assessment can also be encouraged and rewarded in more informal ways. People in leadership positions, such as Ph.D. supervisors and heads of labs, could build ad hoc opportunities for students and researchers to discuss possible risks. 
They can hold seminars on the topic and provide introductions to external experts and stakeholders like social scientists and experts from NGOs.</p><h3>Moderation</h3><p>The robotics community can implement <a href="https://dl.acm.org/doi/10.1145/3593013.3593981" rel="noopener noreferrer" target="_blank">self-regulation mechanisms</a> to moderate the diffusion of high-risk material. This could involve:</p><ul><li>Screening work prior to publication to prevent the dissemination of content posing serious risks.</li><li>Implementing graduated access controls (“gating”) to certain source code or data on open-source repositories, potentially requiring users to identify themselves and specify their intended use.</li><li>Establishing clear guidelines and community oversight to ensure transparency and prevent misuse of these moderation mechanisms. For example, organizations like RAS could design categories of risk levels for robotics research and applications and create a monitoring committee to track and document real cases of the misuse of robotics research to understand and visualize the scale of the risks and create better mitigation strategies.</li></ul><h3>Red Lines</h3><p>The robotics community should also seek to define and enforce red lines for the development and deployment of robotics technologies. Efforts to define red lines have already been made in that direction, notably in the context of the <a href="https://standards.ieee.org/industry-connections/ec/autonomous-systems/" rel="noopener noreferrer" target="_blank">IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems</a>. Companies, including <a href="https://bostondynamics.com/" rel="noopener noreferrer" target="_blank">Boston Dynamics</a>, <a href="https://www.unitree.com/" rel="noopener noreferrer" target="_blank">Unitree</a>, <a href="https://www.agilityrobotics.com/" rel="noopener noreferrer" target="_blank">Agility Robotics</a>, <a href="https://clearpathrobotics.com/" rel="noopener noreferrer" target="_blank">Clearpath Robotics</a>, <a href="https://www.anybotics.com/" rel="noopener noreferrer" target="_blank">ANYbotics</a>, and <a href="https://www.openrobotics.org/" rel="noopener noreferrer" target="_blank">Open Robotics</a> wrote an open letter calling for regulations on the <a href="https://bostondynamics.com/news/general-purpose-robots-should-not-be-weaponized/" rel="noopener noreferrer" target="_blank">weaponization of general-purpose robots</a>. Unfortunately, their efforts were very narrow in scope, and there is a lot of value in further mapping end uses of robotics that should be deemed off-limits or demand extra caution.</p><p>It will absolutely be difficult for the community to agree on standard red lines, because what is considered ethically acceptable or problematic is highly subjective. To support the process, individuals and companies can reflect on what they consider to be unacceptable use of their work. This could result in policies and terms of use that beneficiaries of open research and open-source design software would have to formally agree to (such as specific-use open-source licenses). This would provide a basis for revoking access, denying software updates, and potentially suing or blacklisting people who misuse the technology. Some companies, including Boston Dynamics, have already implemented these measures to some extent. 
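</p><p>As a purely illustrative sketch (not drawn from any existing repository, license, or company policy; the names, identifiers, and prohibited-use terms below are hypothetical), such an agreement gate might look something like this in a release pipeline:</p><pre><code># Purely illustrative sketch of an agreement-gated release check.
# All names, identifiers, and policy terms are hypothetical.
from dataclasses import dataclass

PROHIBITED_USES = ("weaponization", "autonomous targeting", "unlawful surveillance")

@dataclass
class AccessRequest:
    requester_id: str      # a verified identity, e.g. an institutional account
    stated_purpose: str    # the requester's declared intended use
    accepted_terms: bool   # explicit agreement to the specific-use license

def grant_download(request: AccessRequest) -> bool:
    """Grant access only if the terms were accepted and the declared purpose
    does not mention a prohibited use; a real gate would also log the request
    so that access can later be audited or revoked."""
    if not request.accepted_terms:
        return False
    purpose = request.stated_purpose.lower()
    return not any(term in purpose for term in PROHIBITED_USES)

if __name__ == "__main__":
    request = AccessRequest(
        requester_id="lab-042",
        stated_purpose="agricultural crop-monitoring research",
        accepted_terms=True,
    )
    print("access granted" if grant_download(request) else "access denied")
</code></pre><p>A self-declared purpose is, of course, easy to game; the point of such a gate is not to stop a determined bad actor but to make the terms of use explicit, auditable, and revocable. 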
Any person or company conducting open research could replicate this example.</p><p>Openness is the key to innovation and the democratization of many engineering disciplines, including robotics, but it also amplifies the potential for misuse. The engineering community has a responsibility to proactively address the dual-use dilemma. By embracing responsible practices, from education and risk assessment to moderation and red lines, we can foster an ecosystem where openness and security coexist. The challenges are significant, but the stakes are too high to ignore. It is crucial to ensure that research and innovation benefit society globally and do not become a driver of instability in the world. This goal, we believe, aligns with the mission of the IEEE, which is to “advance technology for the benefit of humanity.” The engineering community, especially roboticists, needs to be proactive on these issues to prevent any backlash from society and to preempt potentially counterproductive measures or international regulations that could harm open science.</p>]]></description><pubDate>Tue, 10 Jun 2025 13:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/navigating-the-dual-use-dilemma</guid><category>Robotics</category><category>Guest articles</category><category>Dual-use</category><dc:creator>Vincent Boulanin</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robotic-arm-holding-a-scalpel-merging-into-a-digital-blueprint-on-a-black-and-white-background.png?id=60656210&width=980"></media:content></item><item><title>Video Friday: Hopping on One Robotic Leg</title><link>https://spectrum.ieee.org/video-friday-one-legged-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/black-stick-figures-in-a-skating-pose-scattered-across-a-vast-white-icy-landscape.png?id=60524616&width=1245&height=700&coordinates=69%2C0%2C69%2C0"/><br/><br/><p>
<span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span>
</p><h5><a href="https://www.edrcoalition.com/2025-energy-drone-robotics-summit">2025 Energy Drone & Robotics Summit</a>: 16–18 June 2025, HOUSTON</h5><h5><a href="https://roboticsconference.org/">RSS 2025</a>: 21–25 June 2025, LOS ANGELES</h5><h5><a href="https://robotx.ethz.ch/education/summer-school.html">ETH Robotics Summer School</a>: 21–27 June 2025, GENEVA</h5><h5><a href="https://ias-19.org/">IAS 2025</a>: 30 June–4 July 2025, GENOA, ITALY</h5><h5><a href="https://clawar.org/icres2025/">ICRES 2025</a>: 3–4 July 2025, PORTO, PORTUGAL</h5><h5><a href="https://2025.worldhaptics.org/">IEEE World Haptics</a>: 8–11 July 2025, SUWON, SOUTH KOREA</h5><h5><a href="https://ifac2025-msrob.com/">IFAC Symposium on Robotics</a>: 15–18 July 2025, PARIS</h5><h5><a href="https://2025.robocup.org/">RoboCup 2025</a>: 15–21 July 2025, BAHIA, BRAZIL</h5><h5><a href="https://www.ro-man2025.org/">RO-MAN 2025</a>: 25–29 August 2025, EINDHOVEN, NETHERLANDS</h5><h5><a href="https://clawar.org/clawar2025/">CLAWAR 2025</a>: 5–7 September 2025, SHENZHEN</h5><h5><a href="https://www.corl.org/">CoRL 2025</a>: 27–30 September 2025, SEOUL</h5><h5><a href="https://2025humanoids.org/">IEEE Humanoids</a>: 30 September–2 October 2025, SEOUL</h5><h5><a href="https://worldrobotsummit.org/en/">World Robot Summit</a>: 10–12 October 2025, OSAKA, JAPAN</h5><h5><a href="https://www.iros25.org/">IROS 2025</a>: 19–25 October 2025, HANGZHOU, CHINA</h5><p>
Enjoy today’s videos!
</p><div class="horizontal-rule">
</div><p class="rm-anchors" id="fnzdkxl-jj0">
This single-leg robot is designed to “form a foundation for future bipedal robot development,” but personally, I think it’s perfect as is.
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="e263fb0233d0bb0d075d93a40d651be2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/FNzdKXl-jj0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://dynamicrobot.kaist.ac.kr/">KAIST Dynamic Robot Control and Design Lab</a> ]
</p><div class="horizontal-rule">
</div><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="8a6d56b1cad95583679b96d5194dd022" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/wzYtsJwYfTM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
Selling 17,000
<a data-linked-post="2655919083" href="https://spectrum.ieee.org/social-robots-children" target="_blank">social robots</a> still amazes me. <a data-linked-post="2650251656" href="https://spectrum.ieee.org/aldebaran-robotics-seeking-betatesters-for-its-nao-humanoid-robot" target="_blank">Aldebaran</a> will be missed.
</p><p>
[
<a href="https://aldebaran.com/en/">Aldebaran</a> ]
</p><div class="horizontal-rule">
</div><p class="rm-anchors" id="udti_d_vif0">
Nice to see some actual challenging shoves as part of biped testing.
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="397e23922e40f8dda09c9558813c3604" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/UdtI_D_vIF0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://www.ucr.bot/">Under Control Robotics</a> ]
</p><div class="horizontal-rule">
</div><blockquote class="rm-anchors" id="j5cfeee5pyi">
<em>Ground Control made multilegged waves at IEEE’s International Conference on Robotics and Automation 2025 in Atlanta! We competed in the Startup Pitch Competition and demoed our robot at our booth, on NIST standard terrain, and around the convention. We were proud to be a finalist for Best Expo Demo and participate in the Robot Parade.</em>
</blockquote><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="f805b79697328de135747f04c5a7dac1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/J5cfeEe5pyI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://groundcontrolrobotics.com/">Ground Control Robotics</a> ]
</p><p>
Thanks, Dan!
</p><div class="horizontal-rule">
</div><blockquote class="rm-anchors" id="agrtswo4snw">
<em>Humanoid is a U.K.-based robotics innovation company dedicated to building commercially scalable, reliable and safe robotic solutions for real-world applications.</em>
</blockquote><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="c6f2dda46adfe06e68b0b4b335ec3291" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AgrTSWO4Snw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
It’s a nifty bootup screen, I’ll give them that.
</p><p>
[
<a href="https://thehumanoid.ai/product/">Humanoid</a> ]
</p><p>
Thanks, Kristina!
</p><div class="horizontal-rule">
</div><blockquote class="rm-anchors" id="plm9gaq1jxo">
<em>Quadrupedal robots have demonstrated remarkable agility and robustness in traversing complex terrains. However, they remain limited in performing object interactions that require sustained contact. In this work, we present LocoTouch, a system that equips quadrupedal robots with tactile sensing to address a challenging task in this category: long-distance transport of unsecured cylindrical objects, which typically requires custom mounting mechanisms to maintain stability.</em>
</blockquote><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="5209d97768c506bd070b00ce7aa8e8b2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pLm9gaQ1JXo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://linchangyi1.github.io/LocoTouch/">LocoTouch paper</a> ]
</p><p>
Thanks, Changyi!
</p><div class="horizontal-rule">
</div><blockquote class="rm-anchors" id="2lg-4mdx210">
<em>In this video, Digit is performing tasks autonomously using a whole-body controller for mobile manipulation. This new controller was trained in simulation, enabling Digit to execute tasks while navigating new environments and manipulating objects it has never encountered before.</em>
</blockquote><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="4d2772b70353c22ada366d8040940a1a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/2lG-4mdx210?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
Not bad, although it’s worth pointing out that those shelves are not representative of any market I’ve ever been to.
</p><p>
[
<a href="https://www.agilityrobotics.com/">Agility Robotics</a> ]
</p><div class="horizontal-rule">
</div><p class="rm-anchors" id="xwmwmhrt-fs">
It’s always cool to see robots presented as an incidental solution to a problem as opposed to, you know, robots.
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="686a7d77fbda850290710efc6140a527" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/xWmWmhRt-fs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
The question that you really want answered, though, is “Why is there water on the floor?”
</p><p>
[
<a href="https://bostondynamics.com/products/orbit/">Boston Dynamics</a> ]
</p><div class="horizontal-rule">
</div><blockquote class="rm-anchors" id="gqidyj-akaa">
<em>Reinforcement learning (RL) has significantly advanced the control of physics-based and robotic characters that track kinematic reference motion. We propose a multi-objective reinforcement learning framework that trains a single policy conditioned on a set of weights, spanning the Pareto front of reward trade-offs. Within this framework, weights can be selected and tuned after training, significantly speeding up iteration time. We demonstrate how this improved workflow can be used to perform highly dynamic motions with a robot character.</em>
</blockquote><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="daa86fab0c3d3f61cc1ab142a8056ca3" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/gQidYj-AKaA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://la.disneyresearch.com/publication/amor-adaptive-character-control-through-multi-objective-reinforcement-learning/">Disney Research</a> ]
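</p><p>A minimal sketch of the idea in the abstract above: condition one policy on a reward-weight vector during training, then choose the trade-off at deployment time. Everything here (environment, reward terms, “network”) is a toy stand-in, not Disney Research’s implementation.</p><pre><code># Toy sketch of a weight-conditioned policy for multi-objective RL.
# The environment, reward terms, and "policy network" are stand-ins;
# the actual parameter update is elided.
import numpy as np

rng = np.random.default_rng(0)

def reward_terms(state, action):
    # Three toy objectives: track a reference, save energy, stay smooth.
    tracking = -abs(state - 1.0)
    energy = -action ** 2
    smoothness = -abs(action)
    return np.array([tracking, energy, smoothness])

def policy(state, weights, params):
    # The reward weights are part of the policy input, so one parameter
    # set covers the whole family of trade-offs.
    features = np.concatenate(([state], weights))
    return float(np.tanh(features @ params))

params = 0.1 * rng.normal(size=4)  # 1 state feature + 3 weight features

# During training, sample weight vectors on the simplex so the single
# policy learns behaviors across the Pareto front of trade-offs.
for _ in range(1000):
    w = rng.dirichlet(np.ones(3))
    state = rng.normal()
    action = policy(state, w, params)
    scalar_reward = float(w @ reward_terms(state, action))
    # ...an RL update on `params` would use scalar_reward here...

# After training, the weights are just another input: pick and tune a
# trade-off at deployment time without retraining.
print(policy(0.5, np.array([0.8, 0.1, 0.1]), params))
print(policy(0.5, np.array([0.2, 0.2, 0.6]), params))
</code></pre><p>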
</p><div class="horizontal-rule">
</div><p class="rm-anchors" id="igyjdvu2tc0">
It’s been a week since ICRA 2025, and TRON 1 already misses all the new friends it made!
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="891a8f3fed5dda5103e1a2056cef57e4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/iGyJdVu2tc0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]
</p><div class="horizontal-rule">
</div><p class="rm-anchors" id="hjpps5vcftg">
ROB 450 in Winter 2025 challenged students to synthesize the knowledge acquired in their undergraduate robotics courses at the University of Michigan, applying a systematic and iterative design and analysis process to a real, open-ended robotics problem.
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="cb4df45971f7989fef2eafcf4708c497" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/hjPPS5vcFtg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://robotics.umich.edu/">University of Michigan Robotics</a> ]
</p><div class="horizontal-rule">
</div><p class="rm-anchors" id="hh7fh5ys82q">
What’s the Trick? A talk on human vs. current robot learning, given by Chris Atkeson at the Robotics and AI Institute.
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="ad0b49c258ded8012bd36ea093692f33" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/hh7Fh5YS82Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
</p><p>
[
<a href="https://rai-inst.com/">Robotics and AI Institute (RAI)</a> ]
</p><div class="horizontal-rule">
</div>]]></description><pubDate>Fri, 06 Jun 2025 16:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-one-legged-robot</guid><category>Video friday</category><category>Robotics</category><category>Humanoid robots</category><category>Aldebaran robotics</category><category>Reinforcement learning</category><category>Quadruped robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/black-stick-figures-in-a-skating-pose-scattered-across-a-vast-white-icy-landscape.png?id=60524616&width=980"></media:content></item><item><title>Look for These 7 New Technologies at the Airport</title><link>https://spectrum.ieee.org/7-new-airport-tech</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/line-drawing-of-a-woman-walking-into-an-airport-and-rolling-carryon-luggage-as-she-checks-her-travel-itinerary-on-a-cell-phone.png?id=60389585&width=1245&height=700&coordinates=0%2C115%2C0%2C115"/><br/><br/><p><strong>Take a look around</strong> the airport during your travels this summer and you might spot a string of new technologies at every touchpoint: from pre-arrival, bag drop, and security to the moment you board the plane.</p><p>In this new world, your face is your boarding pass, your electronic luggage tag transforms itself for each new flight, and gate scanners catch line cutters trying to sneak onto the plane early.</p><p>It isn’t the future—it’s now. Each of the technologies to follow is in use at airports around the world today, transforming your journey-before-the-journey.</p><h2>Virtual queuing speeds up airport security</h2><p>As you pack the night before your trip, you ponder the age-old travel question: What time should I get to the airport? The right answer requires predicting the length of the security line. But at some airports, you no longer have to guess; in fact, you don’t have to wait in line at all.</p><p>Instead, you can book ahead and choose a specific time for your security screening—so you can arrive right before your reserved slot, confident that you’ll be whisked to the front of the line, thanks to <a href="https://copenhagenoptimization.com/" rel="noopener noreferrer" target="_blank">Copenhagen Optimization</a>’s Virtual Queuing system.</p><p>Copenhagen Optimization’s machine learning models use linear regression, heuristic models, and other techniques to forecast the volume of passenger arrivals based on historical data. The system is integrated with airport programs to access flight schedules and passenger-flow data from boarding-pass scans, and it also takes in data from lidar sensors and cameras at security checkpoints, X-ray luggage scanners, and other areas.</p><p>If a given day’s passenger volume ends up differing from historical projections, the platform can use real-time data from these inputs to adjust the Virtual Queuing time slots—and recommend that the airport make changes to security staffing and the number of open lanes. The Virtual Queuing system is constantly adjusting to flatten the passenger arrival curve, tactically redistributing demand across time slots to optimize resources and reduce congestion.</p><p>While this system is doing the most, you as a passenger can do the least. Just book a time slot on your airport’s website or app, and get some extra sleep knowing you’ll waltz right up to the security check tomorrow morning.</p><h2>Electronic bag tags</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Line drawing of a woman lifting suitcase at airport baggage check-in with barcode in focus." class="rm-shortcode" data-rm-shortcode-id="64ff97b084fbc93dd936889921e516d7" data-rm-shortcode-name="rebelmouse-image" id="f8bea" loading="lazy" src="https://spectrum.ieee.org/media-library/line-drawing-of-a-woman-lifting-suitcase-at-airport-baggage-check-in-with-barcode-in-focus.png?id=60389664&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">MCKIBILLO</small></p><p>Checking a bag? 
Here’s another step you can take care of before you arrive: Skip the old-school paper tags and generate your own electronic <a href="https://bagtag.com/" target="_blank">Bagtag</a>. This e-ink device (costing about US $80, or €70) looks like a traditional luggage-tag holder, but it can generate a new, paperless tag for each one of your flights.</p><p>You provide your booking details through your airline’s app or the Bagtag app, and the Bagtag system then uses application programming interfaces and secure data protocols to retrieve the necessary information from the airline’s system: your name, flight details, the baggage you’re allowed, and the unique barcode that identifies your bag. The app uses this data to generate a digital tag. Hold your phone near your Bagtag, and it will transmit the encrypted tag data via Bluetooth or NFC. Simultaneously, your phone’s NFC antenna powers the battery-free Bagtag device.</p><p>On the Bagtag itself, a low-power microcontroller decrypts the tag data and displays the digital tag on the e-ink screen. Once you’re at the airport, the tag can be scanned at the airline’s self-service bag drop or desk, just like a traditional paper tag. The device also contains an RFID chip that’s compatible with the luggage-tracking systems that some airlines are using, allowing your bag to be identified and tracked—even if it takes a different journey than you do. When you arrive at the airport, just drop that checked bag and make your way to the security area.</p><h2>Biometric boarding passes</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt="Illustration of a woman using kiosk for facial recognition ID verification." class="rm-shortcode" data-rm-shortcode-id="af8a923d85c7eac7ca6873db756cc3fb" data-rm-shortcode-name="rebelmouse-image" id="3dfdf" loading="lazy" src="https://spectrum.ieee.org/media-library/illustration-of-a-woman-using-kiosk-for-facial-recognition-id-verification.png?id=60389955&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">MCKIBILLO</small></p><p>Over at security, you’ll need your boarding pass and ID. Compared with the old days of printing a physical slip from a kiosk, digital QR code boarding passes are quite handy—but what if you didn’t need anything besides your face? That’s the premise of <a href="https://www.idemia.com/" target="_blank">Idemia Public Security</a>’s biometric boarding-pass technology.</p><p>Instead of waiting in a queue for a security agent, you’ll approach a self-service kiosk or check-in point and insert your government-issued identification document, such as a driver’s license or passport. The system uses visible light, infrared, and ultraviolet imaging to analyze the document’s embedded security features and verify its authenticity. Then, computer-vision algorithms locate and extract the image of your face on the ID for identity verification.</p><p>Next, it’s time for your close-up. High-resolution cameras within the system capture a live image of your face using 3D and infrared imaging. The system’s antispoofing technology prevents people from trying to trick the system with items like photos, videos, or masks. The technology compares your live image to the one extracted from your ID using facial-recognition algorithms. 
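</p><p>As described next, the match itself boils down to turning each face into a compact numeric template and checking a similarity score against a threshold. Here is a minimal sketch of that step, with invented embeddings and an invented threshold rather than Idemia’s actual pipeline:</p><pre><code># Toy face-template match: cosine similarity between two embedding
# vectors, accepted if it clears a tuned threshold. The embeddings,
# dimension, and threshold are invented for illustration.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
id_template = rng.normal(size=128)                        # from the document photo
live_template = id_template + 0.1 * rng.normal(size=128)  # from the live capture

THRESHOLD = 0.8  # tuned to trade off false accepts against false rejects
score = cosine_similarity(id_template, live_template)
print(f"similarity = {score:.3f} ->", "match" if score >= THRESHOLD else "no match")
</code></pre><p>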
Each image is then converted into a compact biometric template—a mathematical representation of your facial features—and a similarity score is generated to confirm a match.</p><p>Finally, the system checks your travel information against secure flight databases to make sure the ticket is valid and that you’re authorized to fly that day. Assuming all checks out, you’re cleared to head to the body scanners—with no biometric data retained by Idemia Public Security’s system.</p><h2>X-rays that can tell ecstasy from eczema meds </h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Illustration of an X-ray machine scanning luggage with schematic view of interior components above." class="rm-shortcode" data-rm-shortcode-id="0ff27fb1769e9930b81f30bde1d86244" data-rm-shortcode-name="rebelmouse-image" id="9c471" loading="lazy" src="https://spectrum.ieee.org/media-library/illustration-of-an-x-ray-machine-scanning-luggage-with-schematic-view-of-interior-components-above.png?id=60389973&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">MCKIBILLO</small></p><p>While you pass through your security screening, that luggage you checked is undergoing its own screening—with a major new upgrade that can tell exactly what’s inside.</p><p>Traditional scanners use one or a few X-ray sources and work by transmission, measuring the attenuation of the beam as it passes through the bag. These systems create a 2D “shadow” image based on differences in the amount and type of the materials inside. More recently, these systems have begun using <a href="https://spectrum.ieee.org/invention-of-ct-scanner" target="_blank">computed tomography</a> to scan the bag from all directions and to reconstruct 3D images of the objects inside. But even with CT, harmless objects may look similar to dangerous materials—which can lead to false positives and also require security staff to visually inspect the X-ray images or even bust open your luggage.</p><p>By contrast, <a href="https://www.smithsdetection.com/" target="_blank">Smiths Detection</a>’s new <a href="https://spectrum.ieee.org/future-baggage-scanners-will-tell-us-what-things-are-made-of" target="_blank">X-ray diffraction</a> machines measure the molecular structure of the items inside your bag to identify the exact materials—no human review required.</p><p>The machine uses a multifocus X-ray tube to quickly scan a bag from various angles, measuring the way the radiation diffracts while switching the position of the focal spots every few microseconds. Then, it analyzes the diffraction patterns to determine the crystal structure and molecular composition of the objects inside the bag—building a “fingerprint” of each material that can much more finely differentiate threats, like explosives and drugs, from benign items.</p><p>The system’s algorithms process this diffraction data and build a 3D spatial image, which allows real-time automated screening without the need for manual visual inspection by a human. After your bag passes through the X-ray diffraction machine without incident, it’s loaded into the cargo hold. 
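</p><p>One way to picture the “fingerprint” step is as matching the measured diffraction signature against a library of reference materials. A toy sketch under that assumption (the spectra, names, and similarity metric are invented, not Smiths Detection’s algorithm):</p><pre><code># Toy "fingerprint" matching: compare a measured diffraction pattern
# against reference signatures and flag known threat materials.
# All spectra and the similarity metric are invented for illustration.
import numpy as np

REFERENCES = {
    "sugar":       np.array([0.10, 0.80, 0.30, 0.05]),
    "explosive_x": np.array([0.70, 0.20, 0.90, 0.40]),
    "medication":  np.array([0.30, 0.40, 0.10, 0.60]),
}
THREATS = {"explosive_x"}

def normalize(x):
    return x / np.linalg.norm(x)

def identify(measured):
    # Pick the reference whose normalized pattern is most similar.
    scores = {name: float(normalize(measured) @ normalize(ref))
              for name, ref in REFERENCES.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

measured = np.array([0.68, 0.22, 0.88, 0.41])  # noisy scan of one item
material, confidence = identify(measured)
print(material, f"{confidence:.3f}", "ALARM" if material in THREATS else "clear")
</code></pre><p>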
Meanwhile, you’ve passed through your own scan at security and are ready to head toward your gate.</p><h2>Airport shops with no cashiers or checkout lanes</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt='Illustration of a woman entering a store with a "Just Walk Out" shopping system.' class="rm-shortcode" data-rm-shortcode-id="995f20a20ef07fc6697d2cac5737b9c1" data-rm-shortcode-name="rebelmouse-image" id="8a4c0" loading="lazy" src="https://spectrum.ieee.org/media-library/illustration-of-a-woman-entering-a-store-with-a-just-walk-out-shopping-system.png?id=60390007&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">MCKIBILLO</small></p><p>While meandering over to your gate from security, you decide you could use a little pick-me-up. Just down the corridor is a convenience store with snacks, drinks, and other treats—but no cashiers. It’s a contactless shop that uses <a href="https://www.justwalkout.com/" target="_blank">Just Walk Out</a> technology by Amazon.</p><p>As you enter the store with the tap of a credit card or mobile wallet, a scanner reads the card and assigns you a unique session identifier that will let the Just Walk Out system link your actions in the store to your payment. Overhead cameras track you by the top of your head, not your face, as you move through the store.</p><p>The Just Walk Out system uses a deep-learning model to follow your movements and detect when you interact with items. In most cases, computer vision can identify a product you pick up simply based on the video feed, but sometimes weight sensors embedded in the shelves provide additional data to determine what you removed. The video and weight data are encoded as tokens, and a neural network processes those tokens in a way similar to how large language models encode text—determining the result of your actions to create a “virtual cart.”</p><p>While you shop, the system continuously updates this cart: adding a can of soda when you pick it up, swapping one brand of gum for another if you change your mind, or removing that bag of chips if you put it back on the shelf. Once your shopping is complete, you can indeed just walk out with your soda and gum. The items you take will make up your finalized virtual cart, and the credit card you entered the store with will be charged as usual. (You can look up a receipt, if you want.) With provisions procured, it’s onward to the gate.</p><h2>Airport-cleaning robots</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Illustration of a woman watching an automated floor cleaning robot cleaning up a spilled drink in the airport." class="rm-shortcode" data-rm-shortcode-id="910995e84aefefbcacb0afaefa6e6a37" data-rm-shortcode-name="rebelmouse-image" id="a8ced" loading="lazy" src="https://spectrum.ieee.org/media-library/illustration-of-a-woman-watching-an-automated-floor-cleaning-robot-cleaning-up-a-spilled-drink-in-the-airport.png?id=60390051&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">MCKIBILLO</small></p><p>As you amble toward the gate with your luggage and snacks, you promptly spill that soda you just bought. Cleanup in Terminal C! 
Along comes <a href="https://avidbots.com/" target="_blank">Avidbots’ Neo</a>, a fully autonomous floor-scrubbing robot designed to clean commercial spaces like airports with minimal human intervention.</p><p>When a Neo is first delivered to the airport, the robot performs a comprehensive scan of the various areas it will be cleaning using lidar and 3D depth cameras. Avidbots software processes the data to create a detailed map of the environment, including walls and other obstacles, and this serves as the foundation for Neo’s cleaning plans and navigation.</p><p>Neo’s human overlords can use a touchscreen on the robot to direct it to the area that needs cleaning—either as part of scheduled upkeep, or when someone (ahem) spills their soda. The robot springs into action, and as it moves, it continuously locates itself within its map and plans its movements using data from wheel encoders, inertial measurement units, and a gyroscope. Neo also updates its map and adjusts its path in real time by using the lidar and depth cameras to detect any changes from its initial mapping, such as a translocated trash can or perambulating passengers.</p><p>Then comes the scrubbing. Neo’s software plans the optimal path for cleaning a given area at this moment in time, adjusting the robot’s speed and steering as it moves along. A water-delivery system pumps and controls the flow of cleaning solution to the motorized brushes, whose speed and pressure can also be adjusted based on the surface the robot is cleaning. A powerful vacuum system collects the dirty water, and a flexible squeegee prevents slippery floors from being left behind.</p><p>While the robot’s various sensors and planning algorithms continuously detect and avoid obstacles, any physical contact with the robot’s bumpers triggers an emergency stop. And if Neo finds itself in a situation it’s just not sure how to handle, the robot will stop and call for assistance from a human operator, who can review sensor data and camera feeds remotely to help it along.</p><h2>“Wrong group” plane-boarding alarm</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt="Illustration of a woman waiting in line at boarding gate E6, with notification bell icon above." class="rm-shortcode" data-rm-shortcode-id="2b01cd5c1c6b0ed8dd2962347946988a" data-rm-shortcode-name="rebelmouse-image" id="bdce3" loading="lazy" src="https://spectrum.ieee.org/media-library/illustration-of-a-woman-waiting-in-line-at-boarding-gate-e6-with-notification-bell-icon-above.png?id=60390066&width=980"/> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">MCKIBILLO</small></p><p>Your airport journey is coming to an end, and your real journey is about to begin. As you wait at the gate, you notice a fair number of your fellow passengers hovering to board even before the agent has made any announcements. And when boarding does begin, a surprising number of people hop in line. <em><em>Could all these people really be in boarding groups 1 and 2?</em></em> you wonder.</p><p>If they’re not…they’ll get called out. 
American Airlines’ new boarding technology stops those pesky passengers who try to join the wrong boarding group and sneak onto the plane early.</p><p>If one such passenger approaches the gate before their assigned group has been called, scanning their boarding pass will trigger an audible alert—notifying the airline crew, and everyone else for that matter. The passengers will be politely asked to wait to board. As they slink back into line, try not to look too smug. After all, it’s been a remarkably easy, tech-assisted journey through the airport today. <span class="ieee-end-mark"></span></p><p><em>This article appears in the July 2025 print issue as “A Walk Through 7 New Technologies at the Airport.”</em></p>]]></description><pubDate>Wed, 04 Jun 2025 16:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/7-new-airport-tech</guid><category>Airlines</category><category>Facial recognition</category><category>Robot cleaner</category><category>Airports</category><category>X-ray diffraction</category><dc:creator>Julianne Pepitone</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/line-drawing-of-a-woman-walking-into-an-airport-and-rolling-carryon-luggage-as-she-checks-her-travel-itinerary-on-a-cell-phone.png?id=60389585&width=980"></media:content></item><item><title>Who Gives a S#!t About Cursing Robots?</title><link>https://spectrum.ieee.org/cursing-social-robot-interaction</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-illustration-of-a-robot-tripping-on-a-banana-peel-its-head-is-covered-by-a-speech-bubble-and-symbols-representing-an-expleti.jpg?id=60333045&width=1245&height=700&coordinates=0%2C375%2C0%2C375"/><br/><br/><p>
The robots that share our public spaces today are so demure. Social robots and service robots aim to avoid offense, erring toward polite airs, positive emotions, and obedience. In some ways, this makes sense—would you really want to have a yelling match with a delivery robot in a hotel? Probably not, even if you’re in New York City and trying to absorb the local culture.
</p><p>
In other ways, this passive social robot design aligns with paternalistic standards that link assistance to subservience. Thoughtlessly following such social norms in robot design may be ill-advised, since it <a href="https://pubmed.ncbi.nlm.nih.gov/37123285/" rel="noopener noreferrer" target="_blank">can help to reinforce outdated or harmful ideas</a>, such as restricting people’s rights or reflecting only the needs of majority-identity users.
</p><p>
In <a href="https://osusharelab.com/" rel="noopener noreferrer" target="_blank">my robotics lab at Oregon State University</a>, <a href="https://spectrum.ieee.org/how-high-fives-help-us-get-in-touch-with-robots" target="_blank">we work with</a> <a href="https://spectrum.ieee.org/whats-the-deal-with-robot-comedy" target="_blank">a playful spirit</a> and enjoy challenging the problematic norms that are entrenched within “polite” interactions and social roles. So we decided to experiment with robots that use foul language around humans. After all, many people are using foul language more than ever in 2025. Why not let robots have a chance, too?
</p><h3>Why and How to Study Cursing Robots</h3><p>
Societal standards in the United States suggest that cursing robots would likely rub people the wrong way in most contexts, as swearing has a predominantly negative connotation. Although some past research shows that cursing <a href="https://www.researchgate.net/publication/238326248_Swearing_at_work_and_permissive_leadership_culture_When_anti-social_becomes_social_and_incivility_is_acceptable" rel="noopener noreferrer" target="_blank">can enhance team cohesion</a> and <a href="https://hrcak.srce.hr/file/159883" rel="noopener noreferrer" target="_blank">elicit humor</a>, certain members of society (such as women) are often expected to <a href="https://link.springer.com/article/10.1023/A:1022986429748" rel="noopener noreferrer" target="_blank">avoid risking offense</a> through profanity. We wondered whether cursing robots would be viewed negatively, or if they might perhaps offer benefits in certain situations.
</p><p>
We decided to study cursing robots in the context of responding to mistakes. Past work in human-robot interaction has already shown that <a href="https://ieeexplore.ieee.org/abstract/document/5453195" rel="noopener noreferrer" target="_blank">responding to error</a> (rather than ignoring it) can help robots be perceived more positively in human-populated spaces, especially in the case of personal and service robots. And one <a href="https://par.nsf.gov/biblio/10284325-perceived-agency-social-norm-violating-robot" rel="noopener noreferrer" target="_blank">study</a> found that compared to other faux pas, foul language is more forgivable in a robot.
</p><p>
With this past work in mind, we generated videos with three common types of robot failure: bumping into a table, dropping an object, and failing to grasp an object. We crossed these situations with three types of responses from the robot: no verbal reaction, a non-expletive verbal declaration, and an expletive verbal declaration. We then asked people to rate the robots on things like competence, discomfort, and likability, using standard scales in an online survey.
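</p><p>In other words, the study is a 3 × 3 factorial design: failure type crossed with response type, with each video rated on the same scales. A toy tabulation of that structure (the numbers are placeholders, not data from the study):</p><pre><code># Toy tabulation of a 3 x 3 design: failure type crossed with the
# robot's verbal response. The ratings below are placeholders, not
# data from the study.
from itertools import product
from statistics import mean

failures = ["bump table", "drop object", "miss grasp"]
responses = ["no reaction", "non-expletive", "expletive"]

# ratings[(failure, response)] -> list of 1-7 likability scores
ratings = {cond: [] for cond in product(failures, responses)}
ratings[("drop object", "no reaction")] += [3, 4, 4]      # placeholder values
ratings[("drop object", "non-expletive")] += [5, 6, 6]
ratings[("drop object", "expletive")] += [6, 5, 7]

for cond, scores in ratings.items():
    if scores:
        print(cond, f"mean likability = {mean(scores):.2f}")
</code></pre><p>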
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="27aa73d6ea081bc41fc24b217317e021" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/hYdN5zLa07Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
<small class="image-media media-caption" placeholder="Add Photo Caption...">What If Robots Cursed? These Videos Helped Us Learn How People Feel about Profane Robots</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Video: Naomi Fitter</small></p><h3>What People Thought of Our Cursing Robots</h3><p>
On the whole, we were surprised by how acceptable swearing seemed to the study participants, especially within an initial group of Oregon State University students but also among the general public. Cursing had no negative impact, and even some positive impacts, among the college students after we removed one curse with religious connotations (god***it), which seemed to be received more negatively than the other cuss words.
</p><p>
In fact, university participants rated swearing robots as the <a href="https://sparqtools.org/mobility-measure/inclusion-of-other-in-the-self-ios-scale/" rel="noopener noreferrer" target="_blank">most socially close</a> and most humorous, and rated non-expletive and expletive robot reactions equivalent on social warmth, competence, discomfort, anthropomorphism, and likability scales. The general public judged non-profane and profane robots as equivalent on most scales, although expletive reactions were deemed most discomforting and non-expletive responses seemed most likable. We believe that the university students were slightly more accepting of cursing robots because of the campus’s progressive culture, where cursing is considered a peccadillo.
</p><p class="ieee-inbody-related">Related: <a href="https://spectrum.ieee.org/whats-the-deal-with-robot-comedy" target="_blank">What’s the Deal With Robot Comedy?</a></p><p>
Since experiments run solely in an online setting do not always represent real-life interactions well, we also conducted a final replication study in person with a robot that made errors while distributing goodie bags to campus community members at Oregon State, which reinforced our prior results.
</p><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="10b0ac5703e47214353a0b0843085bf0" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/DhHhh4yni1I?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
<small class="image-media media-caption" placeholder="Add Photo Caption...">Humans React to a Cursing Robot in the Wild<a href="https://www.youtube.com/@naomi_fitter" rel="noopener noreferrer" target="_blank"></a></small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Video: Naomi Fitter</small></p><p>
We have submitted this work, which represents a well-designed series of empirical experiments with interesting results and replications along the way, to several different journals and conferences. Despite consistently enthusiastic reviewer comments, no editors have yet accepted our work for publication—it seems to be the type of paper that editors are nervous to touch. Currently, the work is under review for a fourth time, for possible inclusion in the 2025 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), in a paper titled “<a href="https://arxiv.org/abs/2505.05831" rel="noopener noreferrer" target="_blank">Oh F**k! How Do People Feel About Robots That Leverage Profanity?</a>”
</p><h3>Give Cursing Robots a Chance </h3><p>
Based on our results, we think cursing robots deserve a chance! Our findings show that swearing robots would typically have little downside and some upside, especially in open-minded spaces such as university campuses. Even for the general public, profane reactions to errors yielded much less distaste than we expected. Our data showed that people cared more about whether robots acknowledged their error at all than whether or not they swore.
</p><p>
People do have some reservations about cursing robots, especially when it comes to comfort and likability, so thoughtfulness may be required to apply curse words at the right time. For example, just as humans do, robots should likely hold back their swear words around children and be more careful in settings that typically demand cleaner language. Robot practitioners might also consider surveying individual users about profanity acceptance as they set up new technology in personal settings—rather than letting robotic systems learn the hard way, perhaps alienating users in the process.
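</p><p>That kind of thoughtfulness is straightforward to encode as a simple gate: acknowledge the error either way, and only pick an expletive variant when the user’s stated preference and the setting allow it. A minimal sketch, with illustrative phrases and rules rather than anything from the study:</p><pre><code># Toy error-reaction selector: always acknowledge the mistake, but only
# use the expletive variant when the setting and the user's stated
# preference allow it. Phrases and rules are illustrative only.
import random

CLEAN_REACTIONS = ["Whoops, my mistake.", "Oops, let me try that again."]
EXPLETIVE_REACTIONS = ["Darn it!", "Oh f**k, sorry about that."]

def pick_reaction(error_detected, user_allows_profanity,
                  children_present, formal_setting, rng):
    if not error_detected:
        return None  # acknowledging the error at all matters most
    ok_to_swear = (user_allows_profanity
                   and not children_present
                   and not formal_setting)
    pool = EXPLETIVE_REACTIONS if ok_to_swear else CLEAN_REACTIONS
    return rng.choice(pool)

rng = random.Random(7)
print(pick_reaction(True, user_allows_profanity=True,
                    children_present=False, formal_setting=False, rng=rng))
print(pick_reaction(True, user_allows_profanity=True,
                    children_present=True, formal_setting=False, rng=rng))
</code></pre><p>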
</p><p>
As more robots enter our day-to-day spaces, they are bound to make mistakes. How they react to these errors is important. Fundamentally, our work shows that people prefer robots that notice when a mistake has occurred and react to this error in a relatable way. And it seems that a range of styles in the response itself, from the profane to the mundane, can work well. So we invite designers to give cursing robots a chance!
</p>]]></description><pubDate>Tue, 03 Jun 2025 16:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/cursing-social-robot-interaction</guid><category>Social robots</category><category>Human robot interaction</category><dc:creator>Naomi Fitter</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-illustration-of-a-robot-tripping-on-a-banana-peel-its-head-is-covered-by-a-speech-bubble-and-symbols-representing-an-expleti.jpg?id=60333045&width=980"></media:content></item></channel></rss>