Blog
Q&A: Intel’s Robert Watts on spatial digital twins

One perplexing reality of embedded design is how difficult it is to bring highly distributed sensor data into a shared relationship so companies can have a unified view of the world. Ideally, vision, radar, IoT and other sensor data would all land in one shared place.
With all that sensor data in one place, AI systems across industrial settings and transportation infrastructure could deliver better inference, greater intelligence and better outcomes.
Getting to that point will rely on spatial digital twins, according to Intel AI Architect Rob Watts, who will speak on the topic at Sensors Converge 2026, running May 5-7 in Santa Clara, CA. Watts founded Intel SceneScape, a tool that helps developers integrate data from disparate sensors. Fierce caught up with Watts ahead of his keynote.
Fierce: How do you describe the main theme of your Sensors Converge keynote, “Unlock the Next Era of AI in the Physical World with Digital Twins”? The program description says, “Spatial digital twins may become the critical layer that connects sensors, systems and decisions to unlock the next wave of innovation.”
Watts: That is a correct statement. Another way of thinking about it: the live, dynamic digital twin represents a machine understanding of the scene or world that is the synthesis of fused sensor data. That software representation is used for determining higher-level actions rather than depending on any single sensor, since individual sensors can be noisy or latent.
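As a rough illustration of that idea (a hypothetical sketch, not SceneScape code), here is how noisy position estimates from two sensors might be fused into a single object state that downstream logic acts on. The sensor names and the confidence-weighted scheme are illustrative assumptions:

```python
# Illustrative sketch only: fusing noisy per-sensor position estimates
# into a single object state in a live digital twin, weighting each
# sensor by its reported confidence.
from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str
    x: float           # meters, in a shared world frame
    y: float
    confidence: float  # 0..1, sensor's self-reported quality

def fuse(observations: list[Observation]) -> tuple[float, float]:
    """Confidence-weighted average of positions from multiple sensors."""
    total = sum(o.confidence for o in observations)
    x = sum(o.x * o.confidence for o in observations) / total
    y = sum(o.y * o.confidence for o in observations) / total
    return x, y

# A camera and a radar disagree slightly; the fused estimate is what
# downstream logic acts on, not either raw reading.
obs = [Observation("camera-01", 4.9, 2.1, 0.9),
       Observation("radar-07", 5.3, 2.0, 0.6)]
print(fuse(obs))  # -> (~5.06, ~2.06)
```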
Fierce: Nvidia talks often about digital twins. Do you think engineers are not relying on them enough?
Watts: One key distinction is between digital twins for design and digital twins for real-time operation: live twins built on multimodal sensor data can be quite different from simulated, rendered digital twins based on the USD (Universal Scene Description) format. The scene graph representation is the convergence point of offline and online digital twins. Most security, industrial, transportation and other analysis is still done using point solutions that are tied to specific sensors, so live digital twins are a nascent concept in many industries.
Fierce: Can you describe two or three requirements to move from isolated deployments to systems that scale across environments?
Watts: The key elements are spatial and temporal synchronization. Each sensor must have a space “stamp” representing its geo-coordinates, or metric coordinates in a local coordinate system, and the systems processing the data from these sensors must be time-synchronized. The accuracy of the spatiotemporal “stamp” directly determines the accuracy of the digital twin.
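A minimal sketch of what such a spatiotemporal stamp might look like, assuming a shared 2D world frame and clocks synchronized by NTP or PTP; the class and method names here are hypothetical, not from SceneScape:

```python
# Illustrative sketch only: giving every sensor reading a spatial and a
# temporal "stamp" so a live digital twin can place it in a shared frame.
import math
import time

class SensorPose:
    def __init__(self, x: float, y: float, heading_rad: float):
        self.x, self.y, self.heading = x, y, heading_rad

    def to_world(self, local_x: float, local_y: float) -> tuple[float, float]:
        """Rotate a detection from the sensor's frame into the shared frame,
        then translate by the sensor's mounting position."""
        c, s = math.cos(self.heading), math.sin(self.heading)
        return (self.x + c * local_x - s * local_y,
                self.y + s * local_x + c * local_y)

# A camera mounted at (10, 5), rotated 90 degrees, sees an object 3 m ahead.
camera = SensorPose(10.0, 5.0, math.pi / 2)
world_xy = camera.to_world(3.0, 0.0)
stamp = time.time_ns()  # all nodes assumed NTP/PTP-synchronized
print(world_xy, stamp)  # -> (10.0, 8.0) plus a nanosecond timestamp
```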
Standardizing the scene output using scene graph concepts provides the foundation of the live digital twin.
Scale is achieved through hierarchy (stitching many live digital twins together) and fusion (grouping individual nodes into aggregate nodes; e.g., 10 individuals in a child scene become one group in the parent scene with a property of 10 members).
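A hypothetical sketch of that fusion step, collapsing ten person nodes in a child scene into one group node in the parent scene; the names and structure are illustrative assumptions, not a SceneScape API:

```python
# Illustrative sketch only: scaling by hierarchy and fusion. Ten
# individual nodes in a child scene collapse into one group node in the
# parent scene, carrying a member-count property.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str
    properties: dict = field(default_factory=dict)

@dataclass
class Scene:
    name: str
    nodes: list[Node] = field(default_factory=list)

def aggregate(child: Scene, kind: str) -> Node:
    """Fuse all nodes of one kind in a child scene into a single
    aggregate node suitable for the parent scene."""
    members = [n for n in child.nodes if n.kind == kind]
    return Node(kind=f"{kind}-group", properties={"members": len(members)})

lobby = Scene("lobby", [Node("person") for _ in range(10)])
campus = Scene("campus", [aggregate(lobby, "person")])
print(campus.nodes[0])  # Node(kind='person-group', properties={'members': 10})
```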
A key question is how advances in AI can improve the digital twin, and how the digital twin accelerates higher-level AI reasoning from sensor data.
Fierce: What do you expect your audience to gain from your talk?
Watts: The Sensors Converge audience includes sensor manufacturers, systems integrators and solution providers. Sensor manufacturers might think about what is missing in their documentation, software or design that prevents their sensors from being integrated into a live digital twin.
Systems integrators should consider how a live digital twin could accelerate their work by separating concerns (sensors feed the digital twin; solutions and business logic work off the digital twin rather than raw sensor data), and solution providers should think about how the digital twin concept will increase accuracy and improve time to market.
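As an illustrative sketch of that separation of concerns, assuming a simple in-process publish/subscribe model (all names hypothetical): sensor fusion writes into the twin, and business logic subscribes to twin state without ever touching raw feeds.

```python
# Illustrative sketch only: sensors write to the twin; business logic
# reads from the twin, never from raw sensor data.
class LiveTwin:
    def __init__(self):
        self._state, self._subscribers = {}, []

    def update(self, object_id: str, state: dict):
        """Called by the sensor-fusion layer, never by business logic."""
        self._state[object_id] = state
        for callback in self._subscribers:
            callback(object_id, state)

    def subscribe(self, callback):
        """Business logic registers here; it sees twin state only."""
        self._subscribers.append(callback)

twin = LiveTwin()
twin.subscribe(lambda oid, s: print(f"alert: {oid} entered {s['zone']}")
               if s.get("zone") == "restricted" else None)
twin.update("person-42", {"zone": "restricted"})  # -> alert fires
```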
All audience members should leave with the understanding that live digital twins are a foundation for AI agents and AI-based solutions.

Rob Watts is AI Architect in the Federal and Aerospace Division at Intel’s Edge Computing Group and lead architect and founder of Intel SceneScape. He has a background in applied physics. His keynote address, “Unlock the Next Era of AI in the Physical World with Digital Twins,” can be heard at noon May 6 during Sensors Converge 2026, held at Santa Clara Convention Center, May 5-7. Registration is available online.
