
oe1 (光电查) - Scientific Papers

2 records
  • Spatial Interpolation Enables Normative Data Comparison in Gaze-Contingent Microperimetry

    Abstract: PURPOSE. To demonstrate methods that enable visual field sensitivities to be compared with normative data without restriction to a fixed test pattern. METHODS. Healthy participants (n = 60, age 19–50) undertook microperimetry (MAIA-2) using 237 spatially dense locations up to 13° eccentricity. Surfaces were fit to the mean, variance, and 5th percentile sensitivities. Goodness-of-fit was assessed by refitting the surfaces 1000 times to the dataset and comparing estimated and measured sensitivities at 50 randomly excluded locations. A leave-one-out method was used to compare individual data with the 5th percentile surface. We also considered cases with unknown fovea location by adding error sampled from the distribution of relative fovea–optic disc positions to the test locations and comparing the shifted data to the fixed surface. RESULTS. Root mean square (RMS) differences between estimated and measured sensitivities were less than 0.5 dB and less than 1.0 dB for the mean and 5th percentile surfaces, respectively. RMS differences were greater for the variance surface (median 1.4 dB, range 0.8–2.7 dB). Across all participants, 3.9% (interquartile range, 1.8–8.9%) of sensitivities fell beneath the 5th percentile surface, close to the expected 5%. Positional error added to the test grid altered the number of locations falling beneath the 5th percentile surface by less than 1.3% in 95% of participants. CONCLUSIONS. Spatial interpolation of normative data enables comparison of sensitivity measurements from varied visual field locations. Conventional indices and probability maps familiar from standard automated perimetry can be produced. These methods may enhance the clinical use of microperimetry, especially in cases of nonfoveal fixation.

    Keywords: fundus perimetry, microperimetry, normative database, AMD, gaze-contingent

    Updated 2025-09-23 15:23:52
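The goodness-of-fit procedure in the abstract (fit a surface to normative sensitivities, hold out 50 locations, compare estimated and measured values by RMS difference) can be sketched in Python. This is an illustrative sketch on synthetic data: inverse-distance-weighted interpolation stands in for the paper's surface fit, and the test grid, sensitivity model, and `idw` helper are assumptions, not the authors' implementation or the MAIA-2 normative data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense grid: 237 locations within 13 deg eccentricity,
# with synthetic sensitivities (dB) that fall off with eccentricity.
n = 237
theta = rng.uniform(0, 2 * np.pi, n)
r = 13 * np.sqrt(rng.uniform(0, 1, n))           # uniform over the disc
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
sens = 30 - 0.4 * r + rng.normal(0, 0.5, n)      # illustrative values only

def idw(train_pts, train_vals, query_pts, power=2.0):
    """Inverse-distance-weighted interpolation (a stand-in for the
    paper's fitted normative surface)."""
    d = np.linalg.norm(query_pts[:, None, :] - train_pts[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w @ train_vals) / w.sum(axis=1)

# Hold out 50 random locations, refit on the rest, and compare
# estimated vs. measured sensitivities at the held-out points.
held = rng.choice(n, 50, replace=False)
mask = np.ones(n, bool)
mask[held] = False
est = idw(pts[mask], sens[mask], pts[held])
rms = np.sqrt(np.mean((est - sens[held]) ** 2))
print(f"RMS difference at held-out locations: {rms:.2f} dB")
```

In the paper this refit-and-compare step is repeated 1000 times; the loop is omitted here for brevity.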

  • A Gaze-Contingent Intention Decoding Engine for Human Augmentation (Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, ETRA '18, Warsaw, Poland, June 14–17, 2018)

    Abstract: Humans process high volumes of visual information to perform everyday tasks. In a reaching task, the brain estimates the distance and position of the object of interest in order to reach for it. With a grasp intention in mind, human eye movements produce specific, relevant patterns. Our Gaze-Contingent Intention Decoding Engine uses eye-movement data and gaze-point position to infer the hidden intention. We detect the object of interest using deep convolutional neural networks and estimate its position in physical space using 3D gaze vectors. We then trigger the possible actions from an action grammar database to perform an assistive movement of the robotic arm, improving action performance in physically disabled people. This document is a short report to accompany the Gaze-Contingent Intention Decoding Engine demonstrator, providing details of the setup used and the results obtained.

    Keywords: Eye-hand interaction, Gaze-contingent systems, Assistive robotics, Eye movements

    Updated 2025-09-23 15:23:52
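The second paper estimates an object's position in physical space from 3D gaze vectors. A standard way to recover a single 3D fixation point from two gaze rays (one per eye) is the midpoint of their closest approach. The sketch below illustrates that generic geometry under stated assumptions; the `gaze_point_3d` helper and its arguments are hypothetical and are not the authors' implementation.

```python
import numpy as np

def gaze_point_3d(o1, d1, o2, d2):
    """Estimate a 3D fixation point as the midpoint of closest approach
    between two gaze rays, each given by an origin o and a direction d.
    (Hypothetical helper; not from the paper.)"""
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    # Minimise |(o1 + t1*d1) - (o2 + t2*d2)| over the ray parameters t1, t2.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b
    if abs(denom) < 1e-12:                 # near-parallel rays: fix t1 = 0
        t1, t2 = 0.0, (d2 @ w) / c
    else:
        t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
        t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Example: eyes 6 cm apart, both looking at a point 0.5 m straight ahead.
p = gaze_point_3d([-0.03, 0, 0], [0.03, 0, 0.5],
                  [0.03, 0, 0], [-0.03, 0, 0.5])
print(p)  # -> approximately [0, 0, 0.5]
```

In practice the two gaze directions from an eye tracker rarely intersect exactly, which is why the closest-approach midpoint (rather than a true intersection) is used.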