“We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.”
“Einride isn’t the only outfit trying to ditch the human driver along with the diesel. But where Tesla, Volvo, Daimler, and Uber are looking to free up some space in the cab, Einride has done away with it altogether. Instead, its trucks carry a suite of lidar, radar, and camera sensors, feeding information about the environment to Nvidia’s AI supercomputer.”
“Creating slow-motion footage is all about capturing a large number of frames per second. If you don’t record enough, it becomes choppy and unwatchable as soon as you slow down your video — unless, that is, you use artificial intelligence to imagine the extra frames.”
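The quote above describes AI frame interpolation: synthesizing intermediate frames between two captured ones so slowed-down footage stays smooth. The sketch below shows only the naive linear-blend baseline that learned, motion-aware methods improve upon; the function name and array shapes are illustrative choices, not part of any system the quote refers to.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_extra):
    """Synthesize n_extra intermediate frames by linear blending.

    This is the naive baseline: each new frame is a weighted average
    of the two real frames. Learned approaches replace this blend with
    motion-aware warping so moving objects don't simply cross-fade.
    """
    frames = []
    for i in range(1, n_extra + 1):
        t = i / (n_extra + 1)  # interpolation weight in (0, 1)
        frames.append(((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype))
    return frames

# Doubling an effective frame rate: insert one blended frame per pair.
a = np.zeros((4, 4), dtype=np.float32)   # stand-in for frame at time t
b = np.ones((4, 4), dtype=np.float32)    # stand-in for frame at time t+1
mid = interpolate_frames(a, b, 1)[0]     # halfway blend of the two frames
```

Blending works for static scenes but smears anything in motion, which is exactly the gap the learned interpolation networks mentioned in the quote are built to close.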
“We don’t know what happened” – Nvidia Chief Executive Officer Jensen Huang.
“Nvidia has no choice but to take steps in the context of the fear, uncertainty and outrage likely to be stimulated by a robot car killing a human being” – Roger Lanctot.
“Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions.”
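The claim that marginals alone cannot pin down a joint distribution is easy to verify with a toy example: two binary variables can be coupled independently or perfectly correlated, and both couplings reproduce the exact same marginals. The distributions below are illustrative, not taken from the quoted paper.

```python
import numpy as np

# Two joint distributions over a binary pair (X, Y).
# Rows index X, columns index Y; entries are probabilities.

# Coupling 1: X and Y independent.
joint_indep = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

# Coupling 2: X and Y perfectly correlated.
joint_corr = np.array([[0.5, 0.0],
                       [0.0, 0.5]])

# Both joints induce identical marginals: P(X) = P(Y) = [0.5, 0.5].
marg_x_indep = joint_indep.sum(axis=1)
marg_x_corr = joint_corr.sum(axis=1)
marg_y_indep = joint_indep.sum(axis=0)
marg_y_corr = joint_corr.sum(axis=0)
```

Since both joints are valid couplings of the same marginals (and infinitely many convex mixtures of them are too), an unsupervised translation model must add assumptions, such as a shared latent space, to prefer one coupling over another.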
“Everything the vehicle “sees” with its sensors, all of the images, mapping data, and audio material picked up by its cameras, needs to be processed by giant PCs in order for the vehicle to make split-second decisions. All this processing must be done with multiple levels of redundancy to ensure the highest level of safety. This is why so many self-driving operators prefer SUVs, minivans, and other large wheelbase vehicles: autonomous cars need enormous space in the trunk for their big “brains.” But Nvidia claims to have shrunk down its GPU, making it an easier fit for production vehicles.”
14 December 2016 / noflux / California gives Nvidia the go-ahead to test self-driving cars on public roads
Nvidia announced that it was partnering with Chinese web giant Baidu to build a platform for semiautonomous cars. (Baidu has approval to test autonomous cars in California as well.) Nvidia also built test cars, and was training them in parking lots and on private roads prior to receiving this new approval from the California DMV. And this summer, a self-driving race car competition called Roborace announced that it was using the Drive PX2 in its vehicles. California has been a hotbed for autonomous testing, but that status is becoming less unique.