What Sparked This Post
This week’s Improving Talk by my good friend Daniel Scheufler explored Think Like a Brain: The Quiet Neuromorphic Revolution. The phrase that stuck with me was “robot see, robot do.” It immediately brought to mind a passage I recently highlighted in Temple Grandin’s book Visual Thinking, where she cautions against simply attaching a knife to a robotic arm to copy human motions. Instead, she argues that the most innovative robotic tools are designed differently: simpler, more effective, and tailored to the task. That connection between Daniel’s examples and Grandin’s insight sparked this reflection.
What the Talk Was About
Daniel walked us through the history of computing—from vacuum tubes to transistors, CPUs, and GPUs—before demonstrating how neuromorphic chips fit into the picture. Unlike CPUs and GPUs, neuromorphic chips operate more like the human brain: they are event-driven, adaptive, and low-power.
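To make “event-driven” concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of abstraction neuromorphic hardware is typically built around. This is my own illustrative Python toy, not code from the talk, and the weight, threshold, leak rate, and spike times are made-up values. The point is simply that computation happens only when an input event arrives; the rest of the time the neuron sits idle and draws essentially no power.

```python
# Toy leaky integrate-and-fire (LIF) neuron, a common model of the
# event-driven computation that neuromorphic chips implement in hardware.
# All parameters and event times below are illustrative assumptions.

def simulate_lif(spike_times, weight=0.6, threshold=1.0, leak=0.1):
    """Process input spike events; return the times at which the neuron fires."""
    potential = 0.0       # membrane potential
    last_event = 0.0      # time of the previous input spike
    output_spikes = []
    for t in spike_times:
        # Between events, the potential leaks toward zero; no work is done
        # during the quiet interval, only this catch-up decay at event time.
        potential *= max(0.0, 1.0 - leak * (t - last_event))
        last_event = t
        # The arriving spike deposits charge.
        potential += weight
        if potential >= threshold:
            output_spikes.append(t)   # the neuron fires an output spike...
            potential = 0.0           # ...and resets
    return output_spikes

# Sparse input: computation happens at exactly these five moments.
print(simulate_lif([1.0, 1.2, 1.3, 5.0, 9.0]))
```

With this made-up input, the neuron fires at t = 1.2 and t = 9.0; everywhere in between, nothing computes. Contrast that with a CPU or GPU polling on every clock tick regardless of whether anything changed.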
That architecture makes them uniquely suited for AI at the edge—a term referring to running artificial intelligence directly on devices close to where the data is generated (such as sensors, wearables, or cars), instead of relying on the cloud. This is especially powerful in scenarios where speed and power efficiency are crucial.
He shared examples such as:
- Industrial IoT
- Medical wearables
- Self-driving cars
All of these benefit from chips that respond in real time, “at the speed of physics.”
A notable case study featured a robot, powered by neuromorphic chips, that could mimic human motion with under a millisecond of latency. Instead of programming every action, humans could guide the robot through VR, and the robot learned and adapted as it went. That “robot see, robot do” capability opens new doors for safety in dangerous environments, like foundries, where human guidance can remain while human risk is reduced.
Why It Resonated With Me
Daniel’s framing lined up with Grandin’s point. If we’re talking about prosthetics (arms, legs, tools that extend human bodies), it makes sense to design robots that replicate human motion. However, if the robot is working independently, the real opportunity lies in developing tools and processes optimized for the task itself, rather than binding them to a human form.
It also resonated with ideas I’ve been reading in The Good Place and Philosophy (a book of essays exploring the show’s characters and ideas through a philosophical lens). The character Janet evolves from an algorithm into something embodied, social, and surprising. Neuromorphic chips feel like they point in that direction—machines that aren’t just crunching syntax, but behaving more like embodied participants in the world. Daniel’s talk grounded that philosophy in physics and engineering, while Grandin’s and Janet’s stories keep the focus on human meaning.
What I’m Still Thinking About
I keep circling back to the balance between mimicry and innovation. Sometimes the goal is to replicate human ability (prosthetics, remote surgery). At other times, it’s to free robots from human constraints and let them solve problems in more effective ways (robotic knives, optimized assembly). Neuromorphic chips blur those lines further, since their emergent behavior means they can both imitate and innovate in ways we haven’t fully mapped yet.
And maybe that’s the bigger question: as these chips make machines more embodied, social, and surprising, are we prepared for what it means to collaborate with them—not just program them?
Want to Watch It?
You can catch Daniel’s full talk here.