“Imagine a game in which you could have intelligent, unscripted and dynamic conversations with non-playable characters (NPCs) with persistent personalities that evolve over time and accurate facial animations and expressions, all in your native tongue.”
“When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds. Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly — making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering.”
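The quote describes inverse rendering only at a high level. As a rough illustration of the underlying idea (fit a scene representation by gradient descent until its rendered views match observed 2D images), here is a toy sketch. It is not NVIDIA's method: the "scene" is just a small grid, the "camera" is a column shift, and there is no neural network or volume renderer.

```python
import numpy as np

# Toy "inverse rendering": recover an unknown scene (here a tiny grid
# of brightness values) by gradient descent so that its rendered views
# match a set of observed 2D images. Real systems use a neural network
# and a differentiable volume renderer; this keeps only the loop.

rng = np.random.default_rng(0)
true_scene = rng.random((4, 4))          # ground truth (unknown to the solver)

def render(scene, shift):
    """Stand-in 'camera': a view is the scene rolled by `shift` columns."""
    return np.roll(scene, shift, axis=1)

views = [render(true_scene, s) for s in range(4)]   # observed images

scene = np.zeros((4, 4))                 # initial guess
lr = 0.25
for _ in range(200):
    grad = np.zeros_like(scene)
    for s, img in enumerate(views):
        residual = render(scene, s) - img
        # gradient of 0.5*||roll(scene, s) - img||^2 w.r.t. scene
        grad += np.roll(residual, -s, axis=1)
    scene -= lr * grad / len(views)

print(np.abs(scene - true_scene).max())  # close to 0
```

Because the renderer here is linear, the loop converges to the exact scene; the point is only the shape of the optimization, not its difficulty.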
“Harness the power of AI to quickly turn simple brushstrokes into realistic landscape images for backgrounds, concept exploration, or creative inspiration. 🖌️ The NVIDIA Canvas app lets you create as quickly as you can imagine.”
via NVIDIA Studio
“We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.”
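The abstract's "scale-specific control of the synthesis" comes from injecting a latent-derived style at each resolution of the generator, via adaptive instance normalization (AdaIN). A minimal numpy sketch of that one operation (array sizes are illustrative, not the paper's):

```python
import numpy as np

def adain(features, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization: a per-layer style vector
    rescales and shifts normalized feature maps.

    features: (N, C, H, W); style_scale, style_bias: (N, C)
    """
    mean = features.mean(axis=(2, 3), keepdims=True)
    std = features.std(axis=(2, 3), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return (style_scale[:, :, None, None] * normalized
            + style_bias[:, :, None, None])

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8, 16, 16))   # feature maps at one generator layer
w = rng.normal(size=(2, 8))           # style scales derived from the latent
b = rng.normal(size=(2, 8))           # style biases derived from the latent
y = adain(x, w, b)
print(y.shape)                        # (2, 8, 16, 16)
```

Applying different styles at coarse versus fine layers is what separates high-level attributes (pose, identity) from fine stochastic detail (freckles, hair).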
via Tero Karras FI (YouTube)
“Powered by the Quadro RTX 6000, this demo shows off production-quality rendering and cinematic frame rates, enabling users to interact with scene elements in real time.”
via NVIDIA (YouTube)
“Thanks to the technology supported by the architecture, including the new DLAA anti-aliasing tech, Turing can improve rendering times of real-time ray tracing over Pascal ‘by a factor of six.’”
“Einride isn’t the only outfit trying to ditch the human driver along with the diesel. But where Tesla, Volvo, Daimler, and Uber are looking to free up some space in the cab, Einride has done away with it altogether. Instead, its trucks carry a suite of lidar, radar, and camera sensors, feeding information about the environment to Nvidia’s AI supercomputer.”
“Creating slow-motion footage is all about capturing a large number of frames per second. If you don’t record enough, it becomes choppy and unwatchable as soon as you slow down your video — unless, that is, you use artificial intelligence to imagine the extra frames.”
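As a baseline for the frame-interpolation idea the quote alludes to, intermediate frames can be synthesized by blending neighboring frames. This sketch is deliberately naive: learned approaches instead estimate motion between frames and warp pixels, which avoids the ghosting that plain blending produces.

```python
import numpy as np

def interpolate(frame_a, frame_b, num_between):
    """Linearly blend two frames to synthesize `num_between`
    intermediate frames at evenly spaced timestamps."""
    frames = []
    for i in range(1, num_between + 1):
        t = i / (num_between + 1)
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

a = np.zeros((2, 2))                  # stand-in frame at time 0
b = np.ones((2, 2))                   # stand-in frame at time 1
mid = interpolate(a, b, 3)            # 3 new frames in between
print([float(f[0, 0]) for f in mid])  # [0.25, 0.5, 0.75]
```

Inserting three frames between each recorded pair turns 30 fps footage into material that plays smoothly at quarter speed.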
“We don’t know what happened” – Nvidia Chief Executive Officer Jensen Huang.
“Nvidia has no choice but to take steps in the context of the fear, uncertainty and outrage likely to be stimulated by a robot car killing a human being” – Roger Lanctot.
“Each of these images took about 18 days for the computers to generate, before reaching a point that the system found them believable.”