-
Accelerating multi-modal image registration using a supervoxel-based variational framework
Abstract: For the successful completion of medical interventional procedures, several concepts, such as daily positioning compensation, dose accumulation or delineation propagation, rely on establishing spatial coherence between planning images and images acquired at different time instants over the course of the therapy. To meet this need, image-based motion estimation and compensation rely on fast, automatic, accurate and precise registration algorithms. However, image registration quickly becomes a challenging and computationally intensive task, especially when multiple imaging modalities are involved. In the current study, a novel framework is introduced to reduce the computational overhead of variational registration methods. The proposed framework selects representative voxels for the registration process based on a supervoxel algorithm. Costly calculations are thereby restricted to a subset of voxels, leading to a less expensive spatially regularized interpolation process. The novel framework is tested in conjunction with the recently proposed EVolution multi-modal registration method. The result is an algorithm that requires few input parameters, is easily parallelizable, and provides an elastic voxel-wise deformation with subvoxel accuracy. The performance of the proposed accelerated registration method is evaluated on cross-contrast abdominal T1/T2 MR scans undergoing a known deformation and on annotated CT images of the lung. We also analyze the ability of the method to capture slow physiological drifts during MR-guided high-intensity focused ultrasound therapies and to perform multi-modal CT/MR registration in the abdomen. Results show that computation time can be reduced by 75% on the same hardware with no negative impact on accuracy.
Keywords: multi-modal registration, non-rigid registration, supervoxel, variational method
Updated 2025-09-10 09:29:36
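The abstract above describes restricting costly similarity evaluations to representative voxels chosen by a supervoxel algorithm, then interpolating the sparse result back to the full grid. A minimal NumPy sketch of that subsampling idea, using regular blocks as a toy stand-in for a real supervoxel segmentation (the function names, the squared-difference cost, and the nearest-representative interpolation are all illustrative assumptions, not the paper's EVolution method):

```python
import numpy as np

def supervoxel_representatives(shape, block=4):
    """Toy stand-in for a supervoxel algorithm: partition a 2-D grid into
    regular blocks and return the centre voxel of each block."""
    h, w = shape
    ys = np.arange(block // 2, h, block)
    xs = np.arange(block // 2, w, block)
    return [(int(y), int(x)) for y in ys for x in xs]

def register_sparse(fixed, moving, block=4):
    """Evaluate a (notionally costly) per-voxel similarity only at the
    representatives, then spread the values to every voxel by a
    nearest-representative lookup -- a crude analogue of the spatially
    regularized interpolation described in the abstract."""
    reps = supervoxel_representatives(fixed.shape, block)
    # Costly step runs only on len(reps) voxels instead of fixed.size.
    sparse = {p: float((fixed[p] - moving[p]) ** 2) for p in reps}
    dense = np.empty(fixed.shape)
    for y in range(fixed.shape[0]):
        for x in range(fixed.shape[1]):
            ry = (y // block) * block + block // 2
            rx = (x // block) * block + block // 2
            # .get covers edge blocks when the size is not a block multiple.
            dense[y, x] = sparse.get((ry, rx), 0.0)
    return dense
```

With `block=4` on an 8x8 grid, the costly evaluation runs at 4 representatives rather than 64 voxels, which is the flavor of saving behind the reported 75% runtime reduction.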
-
Vitamin E-inspired multi-scale imaging agent
Abstract: The production and use of multi-modal imaging agents is on the rise. The vast majority of these imaging agents are limited to a single length scale, typically the organ or tissue scale (e.g. tissues only). This work explores the synthesis of such an imaging agent and discusses applications of our vitamin E-inspired, multi-modal, multi-length-scale imaging agent TB-Toc ((S,E)-5,5-difluoro-7-(2-(5-((6-hydroxy-2,5,7,8-tetramethylchroman-2-yl) methyl) thiophen-2-yl) vinyl)-9-methyl-5H-dipyrrolo-[1,2-c:2’,1’-f][1,3,2] diazaborinin-4-ium-5-uide). We investigate the toxicity of TB-Toc, its starting materials and the lipid-based delivery vehicle in mouse myoblasts and fibroblasts. Further, we investigate the uptake of TB-Toc delivered to cultured cells in both solvent and liposomes. TB-Toc has low toxicity: no change in cell viability was observed up to concentrations of 10 mM. TB-Toc shows time-dependent cellular uptake that is complete in about 30 min. This work is a first step in demonstrating that our vitamin E derivatives are viable multi-modal, multi-length-scale diagnostic tools.
Keywords: Liposomal delivery, Multi-modal, Multi-scale imaging, Vitamin E
Updated 2025-09-09 09:28:46
-
Cross Modal Multiscale Fusion Net for Real-time RGB-D Detection (2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 2018.8.20–2018.8.24)
Abstract: This paper presents a novel multi-modal CNN architecture for object detection that exploits complementary input cues in addition to color information alone. Our one-stage architecture fuses multiscale mid-level features from two individual feature extractors, so that the end-to-end network can accept cross-modal streams and produce high-precision detection results. Compared to other cross-modal fusion neural networks, our solution reduces runtime enough to meet real-time requirements while maintaining high accuracy. Experimental evaluation on the challenging NYUD2 dataset shows that our network achieves 49.1% mAP and processes images in real time at 35.3 frames per second on a single Nvidia GTX 1080 GPU. Compared to the baseline one-stage SSD network on RGB images, which achieves 39.2% mAP, our method delivers a substantial accuracy improvement.
Keywords: multi-modal CNN, fusion network, object detection, RGB-D, real-time
Updated 2025-09-09 09:28:46
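The fusion step described in the abstract above — combining mid-level features from two per-modality extractors so one detector sees both streams — is commonly realised as channel-wise concatenation followed by a learned 1x1 convolution. A minimal NumPy sketch of that pattern (the shapes, the random weights, and the function names are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_by_one_conv(feat, weight):
    """A 1x1 convolution is just a per-pixel matrix multiply:
    (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out)."""
    return feat @ weight

def fuse_midlevel(rgb_feat, depth_feat, weight):
    """Channel-wise concatenation of mid-level features from the RGB and
    depth streams, projected back down by a 1x1 convolution -- one common
    way to realise cross-modal fusion."""
    stacked = np.concatenate([rgb_feat, depth_feat], axis=-1)  # (H, W, 2C)
    return one_by_one_conv(stacked, weight)

# Toy mid-level features from two hypothetical extractors (H=8, W=8, C=16).
rgb_feat = rng.standard_normal((8, 8, 16))
depth_feat = rng.standard_normal((8, 8, 16))
weight = rng.standard_normal((32, 16)) * 0.1  # projects 2C channels back to C
fused = fuse_midlevel(rgb_feat, depth_feat, weight)
print(fused.shape)  # (8, 8, 16)
```

Keeping the fused tensor at the original channel count is what lets such a two-stream head plug into a one-stage detector like SSD without changing the downstream layers.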
-
Improving Motion-based Activity Recognition with Ego-centric Vision (2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Athens, 2018.3.19–2018.3.23)
Abstract: Human activity recognition using wearable computers is an active area of research in pervasive computing. Existing work mainly focuses on the recognition of physical activities, or so-called activities of daily living, by relying on inertial or interaction sensors. A main issue of those studies is that they often target critical applications like health care without any evidence that the monitored activities actually took place. In our work, we aim to overcome this limitation and present a multi-modal, egocentric-vision-based activity recognition approach that is able to recognize the critical objects involved. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that represents the user's arm movement. This enables us to compensate for the weaknesses of the respective sensors. We present first results of our ongoing work on this topic.
Keywords: interaction sensors, Human activity recognition, pervasive computing, wearable computers, inertial sensors, multi-modal egocentric-based activity recognition
Updated 2025-09-04 15:30:14
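Enriching vision features with inertial data, as the abstract above describes, is typically done by feature-level fusion: compute per-window statistics from the accelerometer stream and concatenate them with the vision descriptor so one classifier sees both modalities. A minimal NumPy sketch of that step (window length, feature choice, and function names are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def inertial_features(accel, win=50):
    """Per-window mean and standard deviation of a tri-axial accelerometer
    stream: (N, 3) samples -> one 6-D feature vector per window of `win`
    samples (trailing partial window dropped)."""
    n = (len(accel) // win) * win
    windows = accel[:n].reshape(-1, win, 3)
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)

def fuse(vision_feat, motion_feat):
    """Early (feature-level) fusion: concatenate per-window vision and
    motion descriptors so a single classifier sees both modalities, letting
    one compensate when the other is weak (e.g. a poor camera view)."""
    return np.concatenate([vision_feat, motion_feat], axis=1)
```

For a 100-sample stream with `win=50`, `inertial_features` yields two 6-D motion vectors; fused with, say, 4-D vision descriptors, each window becomes a 10-D input to the classifier.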