Digital Twin – Industrial Navigation Avatars for Cooperative Multi-Machine Automation
For future cooperative machine automation missions, where machines work interactively at close range, relying on perception technologies alone to provide spatial AI can prove both unreliable and dangerous from a safety perspective.
In the Virtual/Augmented Reality computing world, a "Digital Twin" (DT) or "Avatar" is a 3D graphical representation of either the user's character/persona or of movable machines and objects, providing a visually accurate representation for games and/or virtual worlds.
The 1999 movie "The Matrix" depicted a dystopian future in which humanity is unknowingly trapped inside a simulated reality, the "Matrix", created by intelligent machines to distract humans from their dire real-world situation. The 2009 movie "Avatar" depicted future human scientists exploring the poisonous biosphere of the moon Pandora, while mining precious unobtainium, using Na'vi avatar hybrids operated by genetically matched humans.
It was thought these Hollywood DT concepts would be the inspirational catalyst for rapid adoption across the navigation and spatial sciences industries: real-time GNSS/IMU sensor navigation data applied directly to multi-machine DTs, all immersed in 3D synthetic environments, delivering instantaneous VR/AR for humans while concurrently sourcing high-speed 3D digital spatial AI for each machine's onboard mission computer. Today, this remains largely a 3D spatial sciences industry pipe dream.
Why is the Digital Twin important?
For both safety and optimised operational efficiency, future participation in industrial cooperative automation projects will require very precise inter-machine 3D DT surface updates delivered very quickly.
Many future industrial autonomous machinery projects require multiple machines working cooperatively in close proximity. Frequently these machines are complex 3D objects able to change shape and orientation through rotational, jointed, and/or articulated movable parts – for example, airborne refuelling arms, mining machines, or grain harvester discharge chutes. For these types of cooperative machine projects, the simple machine-centroid approach (latitude/longitude/height) is not adequate to meet demanding safety requirements. Instead, real-time knowledge and spatial awareness of all DT solid-body 3D surface volumes, generated by cooperative distributed 3D navigation systems, will be the essential choice, delivering high-rate, accurate positional knowledge of each machine's solid-body surface volume and its 3D relationships to others.
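The core of such a distributed DT navigation system is applying each machine's navigation solution to its body-frame surface model, so every participant knows where every surface actually is in the world. The sketch below is a deliberately simplified, hypothetical illustration: it rotates surface points about the vertical axis only (a real system would apply full roll/pitch/yaw from the IMU), and the function name, hull dimensions, and pose values are all invented for the example.

```python
import math

def transform_surface_points(points, position, heading_rad):
    """Apply a machine's navigation solution (GNSS position + heading) to its
    body-frame surface points, yielding world-frame surface positions.
    Simplified sketch: rotation about the vertical axis only; a full system
    would use a 3D rotation (roll/pitch/yaw) from the IMU."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    world = []
    for x, y, z in points:
        # Rotate in the horizontal plane, then translate by the GNSS position.
        wx = position[0] + c * x - s * y
        wy = position[1] + s * x + c * y
        wz = position[2] + z
        world.append((wx, wy, wz))
    return world

# Body-frame corners of an illustrative 2 m x 1 m x 1 m machine hull.
hull = [(dx, dy, dz) for dx in (0.0, 2.0) for dy in (0.0, 1.0) for dz in (0.0, 1.0)]
pose = transform_surface_points(hull, position=(100.0, 50.0, 10.0),
                                heading_rad=math.pi / 2)
print(pose[0])  # → (100.0, 50.0, 10.0)
```

In a cooperative mission this transform would run for every surface point of every machine at each navigation update, with the results broadcast to the other participants.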
Is it possible to immerse and visualise DT machine navigation in a "virtual world"? Yes – interactive technologies from the online 3D gaming/entertainment industries have been cutting edge for years, using optimised low-cost computational platforms (Xbox™, PlayStation™ and PCs) to deliver distributed multi-object and terrain information, immersed in interactive 3D "worlds", at very high user update rates. For equivalent industrial applications, new 3D spatial processors perform the Xbox™ functions, machine mission computers become the "Players", and navigation sensors (GNSS and IMUs) replace the "Joystick Controllers".
Immersed in a multi-machine DT cooperative 3D virtual environment, massive numbers of precise 3D surface interactions can be computed in real time – not for entertainment, but to determine the complex inter-machine 3D spatial relationships as shapes and orientations change. These computed inter-surface interaction/relationship data sets are then mission-automation-controller ready: essentially, 3D spatial AI for those airborne refuelling arms, UAVs landing on moving platforms (rolling/pitching ship decks), mining machines, or grain harvesters with discharge chutes. Computational power and network communication capabilities then become the only limiting factors for complex multi-machine synthetic environments.
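The fundamental inter-surface computation described above is a closest-approach query between two machines' world-frame surface point sets. The sketch below is a brute-force O(n·m) illustration with invented point sets and a hypothetical safety threshold; a production system handling tens of thousands of surfaces per machine at 20 Hz would use spatial indexing (bounding volume hierarchies, octrees) rather than an exhaustive scan.

```python
import math

def min_surface_separation(points_a, points_b):
    """Brute-force minimum distance between two machines' surface point sets
    (world frame, metres). Illustrative only: real systems use spatial
    indexing to meet 20 Hz update rates on 20,000+ surfaces per machine."""
    best = float("inf")
    for pa in points_a:
        for pb in points_b:
            d = math.dist(pa, pb)
            if d < best:
                best = d
    return best

# Hypothetical sampled surface points for two machines (metres).
machine_a = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
machine_b = [(5.0, 0.0, 0.0), (7.0, 0.0, 0.0)]
sep = min_surface_separation(machine_a, machine_b)
print(sep)  # → 3.0
if sep < 1.0:  # hypothetical safety clearance threshold
    print("ALERT: inter-machine clearance below safety threshold")
```

The resulting separation (and its trend over successive updates) is exactly the kind of "mission automation controller ready" data set the text describes.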
Most modern automation strategies adopt an "individualistic determination of self": each vehicle builds 3D environmental awareness through fused point-based navigation (GNSS latitude/longitude/height) combined with perception sensor technologies (3D lidar, radar, cameras and ultrasonics) as it moves, continuously scanning and mapping in real time, primarily for peripheral object collision avoidance. Typically, older perception data is quickly discarded; the vehicle's own 3D spatial volume occupation and precise external surface positioning are not important considerations. For single-platform missions, this fusion of precision navigation and perception sensors is the popular choice adopted by most companies.
Navigate the surfaces!
Our cities, industrial and rural areas, mines, airports, etc. are all being precision 3D scanned to typical resolutions of ±2 cm, creating massive high-fidelity surface data sets. To similar accuracies, precision machine 3D surface models are also now commonplace, obtained via laser scanning (for older machines) or derived from manufacturers' detailed 3D CAD designs – tens of thousands of surfaces precisely defining each machine's external 3D volume extremities, such as handrails, light poles, etc.
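Those "external 3D volume extremities" are straightforward to derive once a machine's surface model is in hand: scan the model's vertices for the outermost coordinates on each axis. The sketch below is a minimal illustration with invented vertices; a real surface model from a laser scan or CAD export would contain tens of thousands of points.

```python
def volume_extremities(vertices):
    """Axis-aligned extremities of a machine's external 3D volume, computed
    from its surface-model vertices (e.g. laser scan or CAD export)."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Illustrative vertices (metres), including a protruding handrail at z = 3.1 m.
verts = [(0.0, 0.0, 0.0), (6.0, 2.5, 2.8), (3.0, 1.2, 3.1)]
lo, hi = volume_extremities(verts)
print(lo, hi)  # → (0.0, 0.0, 0.0) (6.0, 2.5, 3.1)
```

Note how the handrail, not the hull, sets the machine's height extremity – exactly why centroid-only navigation understates a machine's true occupied volume.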
For these cooperative machine missions, working interactively at close range, the underlying collective DT navigation/spatial AI is guided by significantly different safety considerations, these being:
• Each machine needs to compute the precise positions of its unique external body 3D surfaces (possibly 20,000+ per machine) in real time at very high update rates, typically 20 per second.
• Common knowledge of the 3D volumes occupied by each machine needs to be shared and accessible across all participating platforms quickly, via high-speed network communications.
• Based on prior static 3D terrain knowledge and adjacent machines' navigation updates, each machine needs to perform its own on-board spatial AI safety assessment and, on any discrepancies, uncertainties, or impending disasters, rapidly implement remedial measures.
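The first two bullet points imply a substantial compute and network budget. A back-of-envelope estimate, using the figures given in the text (20,000+ surfaces per machine at 20 updates per second) plus two assumptions of ours – three 64-bit coordinates per surface update and a hypothetical four-machine fleet – gives a feel for the scale:

```python
# Back-of-envelope data-rate estimate for a cooperative DT fleet.
surfaces_per_machine = 20_000     # from the text: 20K+ surfaces per machine
update_rate_hz = 20               # from the text: typically 20 updates/sec
bytes_per_surface = 3 * 8         # assumption: one float64 x/y/z per surface
machines = 4                      # hypothetical cooperative fleet size

updates_per_sec = surfaces_per_machine * update_rate_hz
print(updates_per_sec)            # → 400000 surface updates/sec per machine

fleet_mbps = machines * updates_per_sec * bytes_per_surface * 8 / 1e6
print(round(fleet_mbps, 1))       # → 307.2 Mbit/s raw, across the fleet
```

Hundreds of megabits per second of raw surface data is why the article treats computational power and network capability as the limiting factors; in practice, compression and delta-encoding against the shared static surface models would reduce this considerably.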
Today's technologies, merged for the DT spatial AI future
All the technologies required to deliver the future DT spatial-AI world exist today; they simply need to be merged differently. Precision GNSS and geodesy are long-proven, reliable navigation technologies. The international gaming industry has clearly been delivering high-fidelity immersive 3D spatial-AI environments for decades. Field Programmable Gate Arrays (FPGAs) and ASICs are the electronic devices delivering the extreme spatial processing used by 3D gaming. The API interfaces of the major 3D CAD/CAM software packages can readily be used to build high-fidelity simultaneous multi-machine 3D motion scenario generators, while existing affordable GNSS simulation equipment can be used for final mission safety and operational verification work via batch processing.
For the brave, the wonderful world of DT spatial AI awaits future cooperative machine automation projects: high-speed 3D spatial AI for the on-board mission computers and, for the humans in the loop, high-fidelity synthetic VR/AR visualisations delivered anywhere, anytime across the globe. The benefits include synthetic 3D real-time and recorded playback, selectable spatial layer visualisation, automatic virtual 3D CCD-TV camera object following, and critical event displays, to name a few.
Digital Twin/Avatar technologies are the future.