- Title
- Abstract
- Keywords
- Experimental protocol
- Product
-
Fruit detection in an apple orchard using a mobile terrestrial laser scanner
Abstract: The development of reliable fruit detection and localization systems provides an opportunity to improve crop value and management by limiting fruit spoilage and optimising harvesting practices. Most proposed systems for fruit detection are based on RGB cameras and are thus affected by intrinsic constraints, such as variable lighting conditions. This work presents a new technique that uses a mobile terrestrial laser scanner (MTLS) to detect and localise Fuji apples. An experimental test focused on Fuji apple trees (Malus domestica Borkh. cv. Fuji) was carried out. A 3D point cloud of the scene was generated using an MTLS composed of a Velodyne VLP-16 LiDAR sensor synchronised with an RTK-GNSS satellite navigation receiver. A reflectance analysis of tree elements was performed, obtaining mean apparent reflectance values of 28.9% for leaves, 29.1% for branches and trunks, and 44.3% for apples. These results suggest that the apparent reflectance parameter (at the 905 nm wavelength) can be useful for detecting apples. For that purpose, a four-step fruit detection algorithm was developed. By applying this algorithm, a localization success of 87.5%, an identification success of 82.4%, and an F1-score of 0.858 were obtained relative to the total number of fruits. These detection rates are similar to those obtained by RGB-based systems, but with the additional advantage of providing direct 3D fruit location information that is not affected by sunlight variations. From the experimental results, it can be concluded that LiDAR-based technology and, particularly, its reflectance information, has potential for remote apple detection and 3D location.
Keywords: Mobile terrestrial laser scanning, Agricultural robotics, Fruit detection, LiDAR
Updated 2025-09-12 10:27:22
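
The abstract above reports that apples return a markedly higher apparent reflectance (≈44%) than leaves and wood (≈29%) at 905 nm, and that a four-step algorithm exploits this for detection. The four steps are not spelled out in the abstract, so the sketch below only illustrates the general idea of thresholding reflectance and clustering the surviving points into fruit candidates; the function name `detect_apples`, the 40% threshold, and the DBSCAN parameters are assumptions, not values from the paper.

```python
# Illustrative sketch: reflectance-threshold apple detection in a LiDAR point cloud.
# Input is an (N, 4) array of [x, y, z, apparent_reflectance_percent]; the threshold
# and clustering parameters below are hypothetical placeholders.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_apples(points, reflectance_threshold=40.0, eps=0.05, min_points=10):
    """Return estimated 3D apple centres from a point cloud with reflectance."""
    # Step 1: keep only high-reflectance returns (apples ~44% vs. ~29% for leaves/wood).
    candidates = points[points[:, 3] > reflectance_threshold]
    if len(candidates) == 0:
        return np.empty((0, 3))
    # Step 2: group candidate points into fruit-sized clusters (eps in metres).
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(candidates[:, :3])
    # Step 3: each cluster centroid is one detected apple location.
    centres = [candidates[labels == k, :3].mean(axis=0)
               for k in set(labels) if k != -1]
    return np.asarray(centres)

# Example usage with synthetic data:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform([0, 0, 0, 20], [2, 2, 3, 50], size=(5000, 4))
    print(detect_apples(cloud).shape)
```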
-
In-field high throughput grapevine phenotyping with a consumer-grade depth camera
Abstract: Plant phenotyping, that is, the quantitative assessment of plant traits including growth, morphology, physiology, and yield, is a critical aspect of efficient and effective crop management. Currently, plant phenotyping is a manually intensive and time-consuming process in which human operators make measurements in the field based on visual estimates or using hand-held devices. In this work, methods for automated grapevine phenotyping are developed, aimed at canopy volume estimation and bunch detection and counting. It is demonstrated that both measurements can be performed effectively in the field using a consumer-grade depth camera mounted on board an agricultural vehicle. First, a dense 3D map of the grapevine row, augmented with its color appearance, is generated based on infrared stereo reconstruction. Then, different computational geometry methods are applied and evaluated for plant-by-plant volume estimation. The proposed methods are validated through field tests performed in a commercial vineyard in Switzerland. It is shown that different automatic methods lead to different canopy volume estimates, meaning that new standard methods and procedures need to be defined and established. Four deep learning frameworks, namely AlexNet, VGG16, VGG19 and GoogLeNet, are also implemented and compared to segment visual images acquired by the RGB-D sensor into multiple classes and recognize grape bunches. Field tests are presented showing that, despite the poor quality of the input images, the proposed methods are able to correctly detect fruits, with a maximum accuracy of 91.52%, obtained by the VGG19 deep neural network.
Keywords: Grapevine canopy volume estimation, RGB-D sensing, Agricultural robotics, In-field phenotyping, Deep learning-based grape bunch detection
Updated 2025-09-10 09:29:36
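
The abstract above notes that different computational-geometry methods yield noticeably different canopy volume estimates. The sketch below contrasts two common per-plant estimates, a convex hull and a voxel-occupancy count, assuming the row point cloud has already been segmented plant by plant; both are illustrative stand-ins and not necessarily the exact methods compared in the paper.

```python
# Two simple per-plant canopy volume estimates from a segmented 3D point cloud.
# Voxel size and point counts below are illustrative, not taken from the paper.
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_volume(plant_points):
    """Canopy volume (m^3) of one plant's points via its 3D convex hull."""
    return ConvexHull(plant_points).volume

def voxel_volume(plant_points, voxel=0.05):
    """Volume as (number of occupied voxels) * voxel^3; hole-tolerant but resolution-dependent."""
    occupied = {tuple(v) for v in np.floor(plant_points / voxel).astype(int)}
    return len(occupied) * voxel ** 3

# Example: the same synthetic 1 m cube of points gives two different estimates,
# which mirrors the abstract's point that the estimate depends on the method.
if __name__ == "__main__":
    pts = np.random.default_rng(1).uniform(0.0, 1.0, size=(20000, 3))
    print(f"convex hull: {convex_hull_volume(pts):.2f} m^3, "
          f"voxel grid: {voxel_volume(pts):.2f} m^3")
```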
-
Leaf Area Estimation of Reconstructed Maize Plants Using a Time-of-Flight Camera Based on Different Scan Directions
Abstract: Leaf area is an important plant parameter for plant status and crop yield. In this paper, a low-cost time-of-flight camera, the Kinect v2, was mounted on a robotic platform to acquire 3-D data of maize plants in a greenhouse. The robotic platform drove through the maize rows and acquired 3-D images that were later registered and stitched. Three different maize row reconstruction approaches were compared: merging point clouds generated from both sides of the row in both directions, merging point clouds scanned from just one side, and merging point clouds scanned from opposite directions of the row. The resulting point cloud was subsampled and rasterized, and the normals were computed and re-oriented with a Fast Marching algorithm. Poisson surface reconstruction was applied to the point cloud, and the new vertices and faces generated by the algorithm were removed. The results showed that the approaches of aligning and merging four point clouds per row and two point clouds scanned from the same side produced very similar average mean absolute percentage errors of 8.8% and 7.8%, respectively. The worst error, 32.3%, resulted from merging two point clouds scanned from both sides in opposite directions.
Keywords: crop characterization, precision farming, 3-D sensors, agricultural robotics, plant phenotyping
Updated 2025-09-09 09:28:46
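
The abstract above walks through a subsample → normal estimation and re-orientation → Poisson surface reconstruction → trimming pipeline. Below is a minimal sketch of that sequence using Open3D as a stand-in toolkit: the paper re-orients normals with a Fast Marching algorithm, for which Open3D's consistent-tangent-plane orientation is substituted here; the function name `leaf_area_from_points`, the voxel size, the Poisson depth, and the density-quantile trimming threshold are illustrative assumptions, and converting the closed-mesh area to a one-sided leaf area is left out.

```python
# Sketch of a point-cloud-to-surface-area pipeline, assuming xyz is an (N, 3)
# float array of a single maize row or plant in metres.
import numpy as np
import open3d as o3d

def leaf_area_from_points(xyz, voxel=0.01, depth=8, density_quantile=0.05):
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(xyz))
    pcd = pcd.voxel_down_sample(voxel_size=voxel)       # subsampling step
    # Estimate normals, then re-orient them consistently (stand-in for Fast Marching).
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=30)
    # Poisson surface reconstruction; densities flag vertices with little point support.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    densities = np.asarray(densities)
    # Remove the extra geometry the reconstruction created far from the data.
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, density_quantile))
    # Total mesh surface area (m^2); mapping this to one-sided leaf area is not handled here.
    return mesh.get_surface_area()
```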