Research Objective
To develop a touchless interaction system for observing medical images that addresses the problem of missing features in 3D models due to occlusion, using gesture-based techniques and focus-and-context visualization.
Research Findings
The proposed touchless system effectively enables observation of medical images with a positive user experience, but it has limitations in feature-classification accuracy and gesture-recognition stability. Future work should integrate neural networks or vascular and nerve reconstruction to improve classification accuracy, add spatial cues and AR for richer feedback, and strengthen the gesture-recognition algorithms.
Research Limitations
The system cannot classify features accurately because of voxel-based classification errors. Gesture recognition is affected by ambient light and overlapping hand feature points, leading to instability. Users' sense of space varies, causing confusion during operation. Feedback diversity is limited, and complex operations are avoided, which may reduce functionality.
1. Experimental Design and Method Selection:
The study designed a system using volume rendering with ray casting for 3D model generation, combined with focus-and-context techniques. Gesture recognition via Leap Motion was used for touchless interaction. Methods included region growing and size-based transfer functions for feature extraction, and cylinder/cone view penetration for observing hidden features.
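The region-growing step above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic flood-fill region grower over a 3D voxel volume, where a voxel joins the region when its intensity lies within a tolerance `tol` of the seed's intensity (the function name and parameters are illustrative assumptions):

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tol):
    """Grow a region from `seed` in a 3D volume: a 6-connected voxel is
    added when its intensity is within `tol` of the seed intensity."""
    seed_val = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        # Visit the six face-adjacent neighbors
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```

In a pipeline like the one described, the resulting boolean mask would delimit the extracted feature before the size-based transfer function assigns its opacity and color.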
2. Sample Selection and Data Sources:
Medical imaging datasets from CT or MRI scans were used, sourced from the University of Utah, Viatronix Inc., Tiani Medgraph, Philips Research, and The Institute for Neuroradiology, Frankfurt.
3. List of Experimental Equipment and Materials:
A notebook with Intel Core i7-4720 CPU, 8GB RAM, NVIDIA GTX950M GPU; Leap Motion Controller (SDK 3.2.0); Unity module for system operation.
4. Experimental Procedures and Operational Workflow:
Users loaded 2D image datasets, performed volume rendering, used 3D section cutting and three-axis cross-section synchronization tools for ROI selection, applied feature-extraction methods, and utilized view-penetration functions. Gestures were recognized for interaction, and experiments measured task completion times and user satisfaction.
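The cylinder view-penetration function mentioned above can be sketched geometrically: voxels lying within a given radius of the view ray are selected (e.g., to be rendered transparent) so that occluded features behind them become visible. The sketch below is an assumption about the geometry only, not the paper's code; `cylinder_mask` and its parameters are hypothetical names:

```python
import numpy as np

def cylinder_mask(shape, axis_point, axis_dir, radius):
    """Return a boolean mask of voxels whose distance to the line through
    `axis_point` along `axis_dir` is at most `radius` (an infinite cylinder)."""
    zz, yy, xx = np.indices(shape)
    pts = np.stack([zz, yy, xx], axis=-1).astype(float)
    d = np.asarray(axis_dir, dtype=float)
    d /= np.linalg.norm(d)
    rel = pts - np.asarray(axis_point, dtype=float)
    # Perpendicular distance from each voxel center to the cylinder axis
    proj = rel @ d
    perp = rel - proj[..., None] * d
    dist = np.linalg.norm(perp, axis=-1)
    return dist <= radius
```

A cone-shaped penetration region would follow the same pattern, with the radius growing linearly with the projection distance along the axis.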
5. Data Analysis Methods:
Statistical analysis of completion times from experiments, with results presented in bar graphs. User feedback was collected via questionnaires and analyzed for mean scores on system design aspects.
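The completion-time analysis described above amounts to per-task summary statistics. A minimal sketch, using hypothetical task names and times (the real data and task set come from the study's experiments):

```python
from statistics import mean, stdev

# Hypothetical per-task completion times in seconds, one value per participant
times = {
    "section_cut": [12.4, 15.1, 11.8, 13.9],
    "view_penetration": [20.2, 18.7, 22.5, 19.4],
}

# Mean and standard deviation per task, the values a bar graph would plot
summary = {task: (mean(t), stdev(t)) for task, t in times.items()}
```

Questionnaire responses on the system-design aspects would be reduced the same way, reporting a mean score per aspect.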