I worked as a Research Assistant with Prof. Nicola Bezzo to develop autonomous drones that can map houses and entire neighborhoods. The project is funded by CoStar. My work involved designing the drone platform with the required sensors and hardware, path planning, and computer vision.
Different platforms used as part of the project
A MATLAB app was built to create plans for nadir and oblique paths for photographing the houses. The app uses information about the house, such as its GPS coordinates and footprint, to create waypoints. A path through the waypoints is then generated by solving a Traveling Salesman Problem to minimize the energy spent.
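The waypoint and ordering step can be illustrated with a small sketch. The Python snippet below is not the actual MATLAB app: it places oblique camera stations on a circle around a hypothetical house footprint and orders them with a greedy nearest-neighbor heuristic as a stand-in for the energy-minimizing TSP solver; the footprint, standoff distance, and altitude are made-up values.

```python
# Minimal sketch (not the actual MATLAB app): generate oblique waypoints
# around a house footprint and order them with a nearest-neighbor TSP
# heuristic. Footprint, standoff radius, and altitude are assumed values.
import math

# Hypothetical house footprint corners in local metres (x, y).
footprint = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
cx = sum(p[0] for p in footprint) / len(footprint)
cy = sum(p[1] for p in footprint) / len(footprint)

standoff = 12.0   # horizontal distance from the house centre (assumed)
altitude = 15.0   # flight altitude in metres (assumed)
n_views = 8       # number of oblique camera stations

# Place camera stations on a circle around the footprint centroid,
# each waypoint yawed to face the house.
waypoints = []
for i in range(n_views):
    ang = 2.0 * math.pi * i / n_views
    x = cx + standoff * math.cos(ang)
    y = cy + standoff * math.sin(ang)
    yaw = math.atan2(cy - y, cx - x)
    waypoints.append((x, y, altitude, yaw))

def nn_order(points, start=(0.0, 0.0)):
    """Greedy nearest-neighbour ordering as a stand-in for the
    energy-minimising TSP solver used in the real planner."""
    remaining = list(points)
    ordered, cur = [], start
    while remaining:
        nxt = min(remaining,
                  key=lambda p: math.hypot(p[0] - cur[0], p[1] - cur[1]))
        remaining.remove(nxt)
        ordered.append(nxt)
        cur = (nxt[0], nxt[1])
    return ordered

for wp in nn_order(waypoints):
    print(f"x={wp[0]:6.1f}  y={wp[1]:6.1f}  z={wp[2]:5.1f}  "
          f"yaw={math.degrees(wp[3]):6.1f} deg")
```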
On the Matrice, the front stereo cameras were used to generate a point cloud, which helps improve the safety of the maneuvers. The trajectories produced by the MATLAB app above are also tested on the Matrice.
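As a rough illustration of how a stereo pair becomes a point cloud, the generic OpenCV sketch below computes a disparity map and reprojects it to 3D. The calibration values (focal length, baseline) and image files are assumptions for the example; on the Matrice the point cloud comes from the onboard stereo pipeline rather than this code.

```python
# Generic sketch of turning a rectified stereo pair into a point cloud with
# OpenCV. Calibration values are made up; on the Matrice this data is
# provided by the onboard stereo pipeline instead.
import numpy as np
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

# Semi-global block matching for the disparity map.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

# Assumed intrinsics: fx = 400 px, principal point at the image centre,
# baseline = 0.10 m. Q is the standard 4x4 disparity-to-depth matrix.
fx, baseline = 400.0, 0.10
h, w = left.shape
Q = np.array([[1, 0, 0, -w / 2.0],
              [0, 1, 0, -h / 2.0],
              [0, 0, 0, fx],
              [0, 0, -1.0 / baseline, 0]], dtype=np.float64)

points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 array of XYZ
valid = disparity > 0                          # keep pixels with a valid match
cloud = points[valid]                          # N x 3 point cloud in metres
print(cloud.shape)
```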
Many buildings can be parameterized by planar surfaces. We want to use plane detection to aid localization, find the boundaries of houses and their relative positions, and direct the drone to frame pictures better. We are using PlaneRCNN to infer the planes from RGB images.
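PlaneRCNN is a learned detector whose interface is not reproduced here. As a rough sketch of the underlying idea of extracting a dominant planar surface, the snippet below fits a single plane to 3D points with a simple RANSAC loop; the thresholds and the synthetic "wall" data are assumptions for illustration, not part of the PlaneRCNN pipeline.

```python
# Illustrative RANSAC plane fit on a point cloud. This is NOT PlaneRCNN
# (which infers planes from RGB with a learned network); it only sketches
# how a dominant planar surface such as a wall could be extracted.
import numpy as np

def fit_plane_ransac(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Return (normal, d, inlier_mask) for the plane n.x + d = 0 with the
    most inliers. dist_thresh is the inlier distance in metres (assumed)."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from the three sampled points.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -np.dot(n, sample[0])
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers

# Synthetic example: a noisy vertical wall plus scattered outliers.
rng = np.random.default_rng(1)
wall = np.column_stack([np.full(500, 3.0) + rng.normal(0, 0.02, 500),
                        rng.uniform(-2, 2, 500),
                        rng.uniform(0, 3, 500)])
noise = rng.uniform(-5, 5, (100, 3))
normal, d, mask = fit_plane_ransac(np.vstack([wall, noise]))
print("normal:", np.round(normal, 2), "inliers:", mask.sum())
```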
Obstacle avoidance: The drone may have planned its trajectory around static obstacles known beforehand, but this alone does not guarantee a safe flight, since dynamic or previously unknown obstructions can still cause collisions. Real-time planning is therefore needed to navigate around such obstacles. We are using a local planner based on 3D Vector Field Histograms that operates on the occupancy grid generated from the depth camera.
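A much-simplified, 2D version of the vector-field-histogram idea is sketched below: occupied cells around the robot are binned into angular sectors weighted by proximity, and the drone steers towards the lowest-density sector closest to the goal heading. The real planner is the 3D variant running on the depth camera's occupancy grid; the grid, sector count, and thresholds here are assumed.

```python
# Simplified 2D Vector Field Histogram sketch: build a polar obstacle-density
# histogram from an occupancy grid, then steer towards the free sector closest
# to the goal heading. Cell size, sector count, and threshold are assumed.
import math
import numpy as np

def vfh_direction(grid, robot_rc, goal_heading, cell_size=0.2,
                  n_sectors=36, max_range=4.0, density_thresh=0.5):
    """grid: 2D occupancy probabilities; robot_rc: (row, col) of the robot;
    goal_heading: desired heading in radians. Returns the chosen heading."""
    hist = np.zeros(n_sectors)
    r0, c0 = robot_rc
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] <= 0.0:
                continue
            dx = (c - c0) * cell_size
            dy = (r - r0) * cell_size
            dist = math.hypot(dx, dy)
            if dist == 0.0 or dist > max_range:
                continue
            sector = int((math.atan2(dy, dx) % (2 * math.pi))
                         / (2 * math.pi) * n_sectors)
            # Closer occupied cells contribute more obstacle density.
            hist[sector] += grid[r, c] * (max_range - dist)
    # Candidate sectors below the density threshold; pick the one whose
    # centre heading is closest to the goal heading.
    centers = (np.arange(n_sectors) + 0.5) * 2 * math.pi / n_sectors
    free = np.where(hist < density_thresh)[0]
    if len(free) == 0:
        return None  # no safe direction: stop / hover
    diffs = np.abs((centers[free] - goal_heading + math.pi) % (2 * math.pi) - math.pi)
    return centers[free[np.argmin(diffs)]]

# Toy occupancy grid with an obstacle block ahead of the robot.
grid = np.zeros((40, 40))
grid[18:22, 28:32] = 1.0
heading = vfh_direction(grid, robot_rc=(20, 20), goal_heading=0.0)
print(None if heading is None else f"steer to {math.degrees(heading):.1f} deg")
```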
3D Reconstruction: For inspection, one of the important goals is to build a 3D model of the explored environment. The model captures shapes, dimensions, and other details that provide a realistic view of the location. For this purpose, we feed the point cloud data from the depth camera to RTAB-Map to perform the mapping.
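RTAB-Map handles loop closure and graph optimization internally; the sketch below only illustrates the basic map-assembly step of transforming per-frame depth-camera clouds into a common world frame using known poses and voxel-downsampling the result. The poses, clouds, and voxel size are synthetic placeholders, not the RTAB-Map pipeline.

```python
# Sketch of the map-assembly step behind a point-cloud map: transform each
# frame's cloud by its camera pose into the world frame, then voxel-downsample.
# RTAB-Map additionally performs loop closure and graph optimisation, which
# are not shown; the poses and clouds below are synthetic.
import numpy as np

def assemble_map(frames, voxel=0.05):
    """frames: list of (pose, cloud) with pose a 4x4 camera-to-world matrix
    and cloud an N x 3 array in the camera frame. Returns a downsampled
    world-frame cloud."""
    world_points = []
    for pose, cloud in frames:
        homo = np.hstack([cloud, np.ones((len(cloud), 1))])  # N x 4 homogeneous
        world_points.append((homo @ pose.T)[:, :3])          # into world frame
    pts = np.vstack(world_points)
    # Voxel-grid downsampling: keep one point per occupied voxel.
    keys = np.floor(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[idx]

# Two synthetic frames: the same flat patch seen from two camera positions.
patch = np.column_stack([np.random.uniform(0, 2, 1000),
                         np.random.uniform(0, 2, 1000),
                         np.full(1000, 3.0)])
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[:3, 3] = [0.5, 0.0, 0.0]  # second view shifted 0.5 m along x
map_cloud = assemble_map([(pose_a, patch), (pose_b, patch)])
print(map_cloud.shape)
```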