MIT is working on new sensors that allow vehicles to see up to three meters below the road surface, regardless of snow or fog.
The development of fully autonomous vehicles has been compared to the Moon landing, such are the technological, legal, and even ethical challenges involved in putting an artificial intelligence system behind the wheel. Among all these issues, the car's need to know where it is at all times and to recognize its surroundings is one of the most crucial, and a simple snowfall can render even the most advanced autonomous driving systems useless. For this reason, a team of researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has been working on a system that lets vehicles map what lies beneath the road. Their approach uses ground-penetrating radar (GPR), which can sense stable subsurface features that snow and fog cannot obscure. In this case it is a localizing ground-penetrating radar, or LGPR, developed at another MIT laboratory, Lincoln Laboratory.
The usual solution for environmental awareness so far has been to combine video cameras with LiDAR systems. The latter are efficient at building a 3D map of the surroundings, but their laser pulses cannot penetrate, for example, a blanket of snow. By contrast, the GPR system sends electromagnetic pulses that reach up to three meters deep and detect the asphalt and the composition of the subsoil, as well as the presence of roots and other buried features. CSAIL has leveraged these capabilities to integrate the sensor into an autonomous vehicle and run tests on a closed, snow-covered track.
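The article does not detail how those subsurface readings become a position estimate, but the core of a localizing GPR is map matching: the road is scanned once to build a prior map, and the live scan is then aligned against it. Below is a minimal sketch of that idea; the map format, the normalization, and every name here are illustrative assumptions, not the actual LGPR code.

```python
# Minimal sketch of map-based localization with ground-penetrating radar.
# Assumption: the prior map is a 2D grid of subsurface reflectivity values
# recorded along a road, and the live scan is a short strip of the same
# kind of data captured while driving.
import numpy as np

def localize(prior_map: np.ndarray, live_scan: np.ndarray) -> int:
    """Return the along-road offset (in map cells) where the live scan
    best matches the prior map, using normalized cross-correlation."""
    n = live_scan.shape[0]
    scan = (live_scan - live_scan.mean()) / (live_scan.std() + 1e-9)
    best_offset, best_score = 0, -np.inf
    for offset in range(prior_map.shape[0] - n + 1):
        window = prior_map[offset:offset + n]
        window = (window - window.mean()) / (window.std() + 1e-9)
        score = float((scan * window).mean())
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# Toy usage: hide a known subsurface signature inside a noisy prior map.
rng = np.random.default_rng(0)
prior = rng.normal(size=(500, 32))          # 500 cells along the road
truth = 213
scan = prior[truth:truth + 20] + rng.normal(scale=0.1, size=(20, 32))
print(localize(prior, scan))                # recovers ~213 despite the noise
```

The appeal of the technique is visible even in this toy: the match survives added noise because the buried signature itself is stable, which is exactly why snow on the surface does not matter.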
The project is still in the testing phase and has several obstacles to overcome. For example, the LGPR unit used in the tests is 1.5 meters wide and must be mounted on the outside of the vehicle to work properly. Even so, the researchers believe that, in the medium term, their approach could substantially improve the capabilities of today's autonomous cars.
Another of MIT's initiatives in the field of autonomous vehicles is a photorealistic simulation engine that lets them learn to react in a virtual environment with virtually unlimited scenarios. The problem with the simulators used to date was that their data, drawn from real human driving, did not cover all possibilities: situations such as an imminent crash or an oncoming vehicle drifting into the car's lane are rare in recorded trajectories. MIT researchers have now built a simulator called VISTA that synthesizes an endless range of trajectories the vehicle could follow in the real world.
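As the next paragraph explains, VISTA lifts each recorded camera frame into a 3D point cloud and re-renders it from viewpoints the human driver never actually visited. The sketch below illustrates the geometric core of that idea with a plain pinhole camera model; the intrinsics, the depth map, and the lateral shift are simplified assumptions, and VISTA's actual neural rendering is far more sophisticated.

```python
# Sketch of novel-view synthesis from recorded data: lift a depth image
# into a 3D point cloud, move a virtual pinhole camera sideways, and
# project the points back into a new image. All parameters are assumed.
import numpy as np

FX = FY = 200.0          # assumed focal lengths (pixels)
CX, CY = 160.0, 120.0    # assumed principal point

def depth_to_points(depth: np.ndarray) -> np.ndarray:
    """Back-project an HxW depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - CX) * z / FX
    y = (v.ravel() - CY) * z / FY
    return np.stack([x, y, z], axis=1)

def reproject(points: np.ndarray, lateral_shift: float) -> np.ndarray:
    """Project points into a camera translated sideways by lateral_shift,
    as if the virtual car had deviated from the recorded trajectory."""
    shifted = points - np.array([lateral_shift, 0.0, 0.0])
    z = np.clip(shifted[:, 2], 1e-6, None)
    u = FX * shifted[:, 0] / z + CX
    v = FY * shifted[:, 1] / z + CY
    return np.stack([u, v], axis=1)

# Toy usage: a flat wall 10 m away, viewed from half a meter to the left.
depth = np.full((240, 320), 10.0)
pts = depth_to_points(depth)
pixels = reproject(pts, lateral_shift=0.5)
print(pixels.min(axis=0), pixels.max(axis=0))  # image shifts by FX*0.5/10 = 10 px
```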
In essence, the process starts by collecting video of human driving. Each frame is translated into a 3D point cloud into which the virtual vehicle is placed. At each change of trajectory, the engine simulates the resulting shift in perspective and renders a new photorealistic scene with a neural rendering engine. Every time the virtual car crashes, the system returns it to the starting point, which acts as a penalty; as training progresses, the vehicle travels greater distances without a collision. The researchers have since managed to transfer this learning to a real autonomous car.
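The article does not show the training loop itself, so here is a self-contained toy version of the scheme it describes: the agent is sent back to the start whenever it crashes, and progress is measured by how far it drives without a collision. The lane environment and the tabular Q-learning update are illustrative assumptions, not MIT's method; VISTA trains on photorealistic rendered frames with far richer policies.

```python
# Toy loop: the "car" drifts laterally at random, crashes when it leaves
# the lane, and is then returned to the start. Reward is forward progress,
# so longer crash-free runs mean higher return. Environment and learner
# are illustrative stand-ins for VISTA's actual setup.
import random

LANE = 4              # lateral cells 0..3; leaving them is a crash
ACTIONS = (-1, 0, 1)  # steer left, keep straight, steer right
Q = {(s, a): 0.0 for s in range(LANE) for a in ACTIONS}
alpha, gamma, eps = 0.2, 0.95, 0.1

def step(lateral, action):
    """One simulator tick: apply steering plus random drift."""
    lateral += action + random.choice((-1, 0, 0, 1))
    crashed = lateral < 0 or lateral >= LANE
    reward = -10.0 if crashed else 1.0   # progress vs. crash penalty
    return lateral, reward, crashed

for episode in range(2000):
    lateral, distance = LANE // 2, 0     # reset to the starting point
    while True:
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(lateral, a)])
        nxt, reward, crashed = step(lateral, action)
        target = reward if crashed else reward + gamma * max(
            Q[(nxt, a)] for a in ACTIONS)
        Q[(lateral, action)] += alpha * (target - Q[(lateral, action)])
        if crashed:
            break                        # send the car back to the start
        lateral, distance = nxt, distance + 1
    if episode % 500 == 0:
        print(f"episode {episode}: drove {distance} steps before crashing")
```

Even in this stripped-down form, the printed distances grow over training, which mirrors the behavior the researchers report: the virtual car survives longer and longer between resets.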
Source: MIT