The metaverse can be seen as the next step of the web, providing a virtual, persistent world where users can socialise, work, and play with actions that could have real-world effects.
Facebook recently changed its company name to Meta to reflect its new focus on building a metaverse. During Meta’s recent Connect conference, CEO Mark Zuckerberg was upfront in saying that it will take many years to fully realise the metaverse vision.
Nvidia is positioned to be a key player in making the metaverse a reality, as its expertise spans several fields that will need to converge to get there, including AI, digital twins, data centres, and graphics processing.
Omniverse is Nvidia’s collaborative solution for building digital twins of real-world objects – or even entire premises such as factories – and is often described as a “metaverse for engineers”. The solution is already used by more than 700 companies and 70,000 individual creators.
During this year’s GTC 2021 keynote, Nvidia CEO Jensen Huang announced several Omniverse updates that show the company is getting serious about the metaverse (although not enough to rename itself Metavidia… yet).
First off, Omniverse is moving from beta to general availability. Companies and creators wanting to get started with Omniverse can now have confidence that Nvidia believes it’s production-ready.
“With NVIDIA Omniverse, we can iterate rapidly and recreate the realism we get when photographing physical scale models to explore design, iterate and amplify our clients’ voices during the design process,” said Hilda Espinal, chief technology officer and senior vice president at CannonDesign.
One of the key new features is the Omniverse Replicator, which aims to make it simpler to train deep neural networks.
“Omniverse Replicator allows us to create diverse, massive, accurate datasets to build high-quality, high-performing, and safe datasets, which is essential for AI,” said Rev Lebaredian, VP of simulation technology and Omniverse engineering at NVIDIA.
Nvidia demonstrated the capabilities of Omniverse Replicator later in its presentation with a data-generation engine it has built (and released) called DRIVE Sim, which hosts digital twins of autonomous vehicles:
The other engine released by Nvidia is Isaac Sim, a virtual world for digital twins of manipulation robots.
“While we have built two domain-specific data-generation engines ourselves, we can imagine many companies building their own with Omniverse Replicator,” adds Lebaredian.
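The core idea behind a data-generation engine like Replicator can be sketched in a few lines. The code below is purely illustrative and does not use Omniverse's actual API: it mimics domain randomisation, where scene parameters are varied at random and ground-truth labels come for free because the engine controls the scene.

```python
import random

def randomise_scene(rng):
    """Pick random scene parameters for one synthetic frame (domain randomisation)."""
    return {
        "lighting": rng.uniform(0.2, 1.0),     # normalised light intensity
        "camera_angle": rng.uniform(-30, 30),  # degrees
        "object_class": rng.choice(["car", "pedestrian", "cyclist"]),
    }

def generate_dataset(n_samples, seed=0):
    """Generate n labelled samples; labels are exact because we control the scene."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_samples):
        scene = randomise_scene(rng)
        # A real engine would render the scene to an image here; the
        # ground-truth label is read directly from the scene description.
        dataset.append({"scene": scene, "label": scene["object_class"]})
    return dataset
```

This is why Lebaredian can call the datasets "diverse, massive, accurate": diversity comes from the randomisation, scale from cheap generation, and accuracy from labels derived directly from the simulated scene rather than human annotation.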
Other announcements for Omniverse include:
- NVIDIA CloudXR has now been integrated into Omniverse Kit, allowing users to interactively stream Omniverse experiences to their mobile AR and VR devices.
- Omniverse VR introduces the world’s first full-image, real-time ray-traced VR.
- Omniverse XR Remote provides AR capabilities and virtual cameras, enabling designers to view their assets fully ray traced through iOS and Android devices.
- Omniverse Farm lets teams use multiple workstations or servers together to power jobs like rendering, synthetic data generation or file conversion.
- Omniverse Showroom lets anyone play with tech demos showcasing the platform’s real-time physics and rendering technologies.
A major feature that deserves its own section is Omniverse Avatar. The name is quite self-explanatory: the feature brings together Nvidia’s aforementioned expertise in areas like AI and simulation to generate interactive avatars.
The 3D avatars are ray-traced and can understand natural language to converse on a range of subjects. While they could eventually power metaverse avatars, customer service interactions are a much nearer-term use case.
“The dawn of intelligent virtual assistants has arrived,” commented Jensen Huang, founder and CEO of NVIDIA. “Omniverse Avatar combines NVIDIA’s foundational graphics, simulation, and AI technologies to make some of the most complex real-time applications ever created.”
“The use cases of collaborative robots and virtual assistants are incredible and far reaching.”
Nvidia itself is working on a customer support platform currently called Project Tokkio. In a demonstration, Huang showed colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself.
Previous attempts to create human-like avatars have prompted users to feel uneasy due to the “uncanny valley” effect. The use of a more cartoonish aesthetic helps to counter this.
The demonstration showed the capabilities of Omniverse Avatar to converse across a range of topics, including biology and climate science. In another demo, a customer service kiosk was able to take the food orders of two customers.
While demonstrating the DRIVE Concierge platform, an avatar on the centre dashboard assisted the driver with selecting the best driving mode to reach his destination on time, and followed his request to be alerted when the vehicle’s range dropped below 100 miles.
All of the demonstrations were powered by Nvidia’s AI software.
The speech recognition of the avatars is based on Nvidia Riva, recommendations by Nvidia Merlin, perception by Nvidia Metropolis, and animations powered by Nvidia Video2Face and Audio2Face.
Natural language understanding for the demonstrated avatars was based on the Megatron 530B pre-trained model, which is currently the world’s largest customisable language model and requires little or no training to achieve impressive capabilities including answering questions, completing sentences, summarising complex stories, translations, and other domains that it’s not specifically trained for.
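That stack amounts to a pipeline of independent AI services chained into one conversational loop. The sketch below is illustrative only: the function names are placeholders, not NVIDIA's actual Riva, Merlin, Metropolis, or Audio2Face APIs, but they show how a single avatar "turn" composes the stages.

```python
# Placeholder stand-ins for the stages of an avatar stack, chained in order.

def speech_to_text(audio):
    """Stands in for speech recognition (the Riva stage)."""
    return audio["transcript"]

def understand(text):
    """Stands in for language understanding (the Megatron stage)."""
    if "order" in text:
        return {"intent": "place_order", "item": text.split()[-1]}
    return {"intent": "chat", "topic": text}

def respond(intent):
    """Stands in for dialogue and recommendations (the Merlin stage)."""
    if intent["intent"] == "place_order":
        return f"Adding {intent['item']} to your order."
    return f"Happy to talk about {intent['topic']}."

def animate(reply):
    """Stands in for facial animation (the Audio2Face stage)."""
    return {"speech": reply, "frames": len(reply.split())}

def avatar_turn(audio):
    """One conversational turn: audio in, animated spoken reply out."""
    return animate(respond(understand(speech_to_text(audio))))
```

The design point is that each stage is swappable: the same loop serves a food-ordering kiosk, Project Tokkio support sessions, or DRIVE Concierge, with only the understanding and response stages retargeted.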
Overall, it’s a great showing from Nvidia which shows how serious the company is about playing a key role in making the metaverse a reality—even if it’s not ready to change its name to reflect that yet.
(Image Credit: Nvidia)
Want to learn more about the IoT from leaders in the space? Check out IoT Tech Expo Europe taking place on 23-24 November 2021.