
oe1 (光电查) - Scientific Papers

44 records
  • Spatially Resolved Material Quality Prediction Via Constrained Deep Learning (2019 IEEE 46th Photovoltaic Specialists Conference (PVSC), Chicago, IL, USA, 16-21 June 2019)

    Abstract: Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion-detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diam. 6–16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM) both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with ~2 mm pixels provided higher detection performance than those with ~4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.

    Keywords: PET/CT reconstruction, PET/CT, image reconstruction, image quality assessment

    Updated 2025-09-19 17:13:59
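
    The following is a minimal, illustrative sketch of the channelized non-prewhitened (CNPW) numerical observer mentioned in the abstract above: channel templates reduce each image patch to a few channel responses, and the observer template is the difference of the mean responses under the lesion-present and lesion-absent hypotheses. The difference-of-Gaussians channels, channel widths, and stand-in data are assumptions; the localization search and LROC analysis are not shown.

```python
# Hedged sketch of a channelized non-prewhitened (CNPW) observer on 2D patches.
# Channel choice (difference-of-Gaussians) and all data are illustrative stand-ins.
import numpy as np

def dog_channels(size, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Build difference-of-Gaussians channel templates as columns of a matrix."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2] + 0.5
    r2 = x**2 + y**2
    gaussians = [np.exp(-r2 / (2 * s**2)) for s in sigmas]
    chans = [g1 / g1.sum() - g0 / g0.sum() for g0, g1 in zip(gaussians, gaussians[1:])]
    return np.stack([c.ravel() for c in chans], axis=1)      # (npix, nchan)

def cnpw_statistics(patches, U, w):
    """Channel responses v = U^T g and scalar test statistic lambda = w^T v."""
    v = patches.reshape(len(patches), -1) @ U                 # (n, nchan)
    return v @ w

# Fit the observer template from labeled training patches, then score a test set.
rng = np.random.default_rng(0)
U = dog_channels(32)
present = rng.normal(1.0, 1.0, (200, 32, 32))   # stand-in lesion-present patches
absent = rng.normal(0.0, 1.0, (200, 32, 32))    # stand-in lesion-absent patches
v_p = present.reshape(200, -1) @ U
v_a = absent.reshape(200, -1) @ U
w = v_p.mean(axis=0) - v_a.mean(axis=0)          # non-prewhitened observer template
scores = cnpw_statistics(np.concatenate([present, absent]), U, w)
```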

  • Photovoltaic Inverter Momentary Cessation: Recovery Process is Key (2019 IEEE 46th Photovoltaic Specialists Conference (PVSC), Chicago, IL, USA, 16-21 June 2019)

    Updated 2025-09-19 17:13:59

  • Shapley-Value-Based Distribution of the Costs of Solar Photovoltaic Plant Grid Connection (2019 16th International Conference on the European Energy Market (EEM), Ljubljana, Slovenia, 18-20 September 2019)

    Updated 2025-09-16 10:30:52

  • Generating Image Distortion Maps Using Convolutional Autoencoders with Application to No Reference Image Quality Assessment

    Abstract: We present two contributions in this work: (i) a reference-free image distortion map generating algorithm for spatially localizing distortions in a natural scene, and (ii) no-reference image quality assessment (NRIQA) algorithms derived from the generated distortion map. We use a convolutional autoencoder (CAE) for distortion map generation. We rely on distortion maps generated by the SSIM image quality assessment (IQA) algorithm as the “ground truth” for training the CAE. We train the CAE on a synthetically generated dataset composed of pristine images and their distorted versions. Specifically, the dataset was created by applying standard distortions such as JPEG compression, JP2K compression, Additive White Gaussian Noise (AWGN) and blur to the pristine images. SSIM maps are then generated for each distorted image in the dataset and are in turn used for training the CAE. We first qualitatively demonstrate the robustness of the proposed distortion map generation algorithm over several images with both traditional and authentic distortions. We also demonstrate the distortion map’s effectiveness quantitatively on both standard distortions and authentic distortions by deriving three different NRIQA algorithms. We show that these NRIQA algorithms deliver competitive performance over traditional databases like LIVE Phase II, CSIQ, TID 2013, LIVE MD and MDID 2013, and databases with authentic distortions like LIVE Wild and KonIQ-10K. In summary, the proposed method generates high-quality distortion maps that are used to design robust NRIQA algorithms. Further, the CAE-based distortion map generation method can easily be modified to work with other ground-truth distortion maps.

    Keywords: convolutional neural network, no-reference image quality assessment (IQA), human visual system (HVS), autoencoders

    Updated 2025-09-11 14:15:04
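
    As a rough illustration of the training setup described in the entry above, the PyTorch sketch below regresses a small convolutional autoencoder onto precomputed SSIM maps with an MSE loss. The layer sizes, learning rate, and random stand-in tensors are assumptions, not the paper's configuration.

```python
# Hedged sketch: a convolutional autoencoder trained to predict an SSIM-style
# distortion map from a distorted grayscale image. All sizes are illustrative.
import torch
import torch.nn as nn

class DistortionMapCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DistortionMapCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step: SSIM maps of (pristine, distorted) pairs would
# be precomputed offline and serve as the regression target.
distorted = torch.rand(8, 1, 64, 64)          # stand-in batch of distorted patches
target_ssim_map = torch.rand(8, 1, 64, 64)    # placeholder for real SSIM maps
pred = model(distorted)
loss = loss_fn(pred, target_ssim_map)
opt.zero_grad()
loss.backward()
opt.step()
```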

  • No-reference image quality assessment based on dual-channel convolutional neural network (2018 International Symposium in Sensing and Instrumentation in IoT Era (ISSI), Shanghai, China, 6-7 September 2018)

    Abstract: In recent years, convolutional neural networks have achieved better results than traditional handcrafted methods and have been widely used in the field of image quality assessment. This paper presents a no-reference image quality assessment method based on a dual-channel convolutional neural network. The raw image is labeled by visual information fidelity and divided into multiple patches as input. Feature extraction is then performed by two network channels with different pooling layers. The features are linearly stitched and sent to the fully connected layer. Experimental results on the LIVE and TID2008 databases show that our model achieves state-of-the-art performance and obtains better consistency with human subjective assessment.

    Keywords: convolutional neural network, image quality assessment, dual-channel

    Updated 2025-09-10 09:29:36
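
    A rough PyTorch sketch of the dual-channel idea in the entry above: two convolutional branches that differ in their pooling operator (max versus average pooling is assumed here), whose features are concatenated and regressed to a patch quality score. Layer sizes and the particular pooling pair are assumptions.

```python
# Hedged sketch of a dual-channel CNN for no-reference patch quality prediction.
import torch
import torch.nn as nn

def branch(pool):
    """One convolutional channel; `pool` is a pooling layer class (Max or Avg)."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), pool(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), pool(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualChannelIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.max_branch = branch(nn.MaxPool2d)
        self.avg_branch = branch(nn.AvgPool2d)
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, patch):
        # Concatenate ("linearly stitch") the two branches' features, then regress.
        f = torch.cat([self.max_branch(patch), self.avg_branch(patch)], dim=1)
        return self.head(f)

patches = torch.rand(16, 1, 32, 32)        # stand-in grayscale patches
scores = DualChannelIQA()(patches)         # image score = e.g. mean over patches
```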

  • A New Image Quality Metric Using Compressive Sensing and a Filter Set Consisting of Derivative and Gabor Filters

    Abstract: This paper proposes an image quality metric (IQM) using compressive sensing (CS) and a filter set consisting of derivative and Gabor filters. Compressive sensing, which acquires a sparse or compressible signal from a small number of measurements, is used to measure the quality difference between the reference and distorted images. However, an image is generally neither sparse nor compressible, so a CS technique cannot be applied directly for image quality assessment. Thus, to convert an image into a sparse or compressible signal, the image is convolved with filters such as gradient, Laplacian-of-Gaussian, and Gabor filters, since the filter outputs are generally compressible. A small number of measurements obtained by a CS technique are then used to evaluate the image quality. Experimental results on various test images show the effectiveness of the proposed algorithm in terms of the Pearson correlation coefficient (CC), root mean squared error, Spearman rank order CC, and Kendall CC.

    Keywords: difference mean opinion score, Gabor filter, image quality assessment, compressive sensing, derivative filters

    Updated 2025-09-10 09:29:36
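
    The measurement idea from the entry above can be illustrated roughly as follows: filter both images so the responses become compressible, project each response onto a small random Gaussian measurement matrix, and compare the measurement vectors. Sobel and Laplacian-of-Gaussian filters stand in for the paper's derivative and Gabor filter set, and pooling the differences into a single score is an assumption, not the paper's exact recipe.

```python
# Hedged sketch: compressive measurements of filter responses used as a
# reference-vs-distorted comparison. Filter set and pooling are illustrative.
import numpy as np
from scipy.ndimage import sobel, gaussian_laplace

def cs_measure(image, phi):
    """Random compressive measurements of each filter response, stacked per filter."""
    responses = [sobel(image, axis=0), sobel(image, axis=1),
                 gaussian_laplace(image, sigma=1.5)]
    return np.stack([phi @ r.ravel() for r in responses])    # (nfilt, m)

def cs_iqm(ref, dist, m=64, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=(m, ref.size)) / np.sqrt(m)         # measurement matrix
    y_ref, y_dist = cs_measure(ref, phi), cs_measure(dist, phi)
    # Illustrative score: mean squared difference between measurement vectors
    # (lower means the distorted image is closer to the reference).
    return float(np.mean((y_ref - y_dist) ** 2))

ref = np.random.default_rng(1).random((128, 128))
dist = ref + 0.05 * np.random.default_rng(2).standard_normal((128, 128))
print(cs_iqm(ref, dist))
```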

  • Contextual Information Based Quality Assessment for Contrast-Changed Images

    Abstract: In this paper, we propose an objective metric that can precisely predict the perceptual quality of contrast-changed images using inter-pixel contextual information. The metric consists of two parts. One is a 2D-histogram-based contrast quality measure that uses the distribution of gray-level differences between adjacent pixels. We design the desired 2D histogram considering the characteristics of an adequately high-contrast image, and predict contrast quality by comparing the desired 2D histogram with the 2D histograms of the original image and the contrast-changed image. The other is a spatial-entropy-based measure that uses information about the spatial distribution of gray levels. A comparison is carried out against many IQA metrics on five contrast-related databases. Experimental results show that the proposed metric provides a more accurate prediction of human perception of contrast change than other metrics.

    Keywords: 2D histogram and spatial entropy, image quality assessment (IQA), contrast-changed images

    Updated 2025-09-10 09:29:36
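
    A minimal numpy sketch of the two ingredients described in the entry above: a joint 2D histogram over adjacent gray-level pairs and a block-wise spatial entropy. The comparison against the designed "desired" 2D histogram and the fusion of the two measures are the paper's contribution and are not reproduced; block size and bin counts are assumptions.

```python
# Hedged sketch of the two contextual features used by the contrast metric.
import numpy as np

def pairwise_histogram(img, bins=64):
    """Normalized joint histogram of horizontally adjacent gray-level pairs."""
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    h, _, _ = np.histogram2d(left, right, bins=bins, range=[[0, 256], [0, 256]])
    return h / h.sum()

def spatial_entropy(img, block=16, bins=32):
    """Mean Shannon entropy of gray levels over non-overlapping blocks."""
    ents = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            p, _ = np.histogram(img[i:i + block, j:j + block], bins=bins,
                                range=(0, 256), density=True)
            p = p[p > 0] / p[p > 0].sum()
            ents.append(-np.sum(p * np.log2(p)))
    return float(np.mean(ents))

img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
H = pairwise_histogram(img)                # compared against a "desired" histogram
print(H.shape, spatial_entropy(img))
```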

  • Stereoscopic Image Quality Assessment by Deep Convolutional Neural Network

    Abstract: In this paper, we propose a no-reference (NR) quality assessment method for stereoscopic images based on a deep convolutional neural network (DCNN). The method is inspired by the internal generative mechanism (IGM) in the human brain, which indicates that the brain first analyzes perceptual information and then extracts effective visual information. To simulate the inner interaction process in the human visual system (HVS) when perceiving the visual quality of stereoscopic images, we construct a two-channel DCNN to evaluate the visual quality of stereoscopic images. First, we design a Siamese network to extract high-level semantic features of the left- and right-view images, simulating the process of information extraction in the brain. Second, to imitate the information interaction process in the HVS, we combine the high-level features of the left- and right-view images by convolutional operations. Finally, the information after interactive processing is used to estimate the visual quality of the stereoscopic image. Experimental results show that the proposed method estimates the visual quality of stereoscopic images accurately, which also demonstrates the effectiveness of the proposed two-channel convolutional neural network in simulating the perception mechanism of the HVS.

    Keywords: convolutional neural network, image quality assessment, no-reference, stereoscopic images

    Updated 2025-09-10 09:29:36
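
    A hedged PyTorch sketch of the two-channel structure described in the entry above: a shared (Siamese) convolutional encoder applied to the left- and right-view images, whose feature maps are combined by further convolutions to imitate binocular interaction before quality regression. All layer sizes are illustrative assumptions.

```python
# Hedged sketch of a Siamese encoder plus convolutional fusion for stereo NR-IQA.
import torch
import torch.nn as nn

class StereoIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # shared weights for both views
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fusion = nn.Sequential(               # models binocular interaction
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Linear(64, 1)

    def forward(self, left, right):
        fl, fr = self.encoder(left), self.encoder(right)
        return self.regressor(self.fusion(torch.cat([fl, fr], dim=1)))

left = torch.rand(4, 1, 64, 64)                    # stand-in left views
right = torch.rand(4, 1, 64, 64)                   # stand-in right views
quality = StereoIQA()(left, right)                 # (4, 1) predicted quality scores
```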

  • Unified No-Reference Quality Assessment of Singly and Multiply Distorted Stereoscopic Images

    Abstract: A challenging problem in no-reference quality assessment of multiply distorted stereoscopic images (MDSIs) is to simulate the monocular and binocular visual properties under a mixed type of distortions. Due to the joint effects of multiple distortions in MDSIs, the underlying monocular and binocular visual mechanisms have different manifestations from those of singly distorted stereoscopic images (SDSIs). This paper presents a unified no-reference quality evaluator for SDSIs and MDSIs by learning monocular and binocular local visual primitives (MB-LVPs). The main idea is to learn MB-LVPs to characterize the local receptive field properties of the visual cortex in response to SDSIs and MDSIs. Furthermore, we also consider that the learning of primitives should be performed in a task-driven manner. To this end, two penalty terms, reconstruction error and quality inconsistency, are jointly minimized within a supervised dictionary learning framework, generating a set of quality-oriented MB-LVPs for each single- and multiple-distortion modality. Given an input stereoscopic image, feature encoding is performed using the learned MB-LVPs as codebooks, resulting in the corresponding monocular and binocular responses. Finally, responses across all the modalities are fused with probabilistic weights, which are determined by the modality-specific sparse reconstruction errors, yielding the final monocular and binocular features for quality regression. The superiority of our method has been verified on several SDSI and MDSI databases.

    Keywords: multiply distorted, singly distorted, receptive field, monocular and binocular vision, stereoscopic image, local visual primitive, no-reference image quality assessment

    Updated 2025-09-09 09:28:46
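
    The fusion step in the entry above can be illustrated roughly as follows: a local feature is sparsely encoded against each modality's dictionary (orthogonal matching pursuit is assumed here), and the per-modality codes are fused with weights derived from the sparse reconstruction errors. The random stand-in dictionaries replace the paper's supervised dictionary learning, and the exp(-error) weighting is an assumption.

```python
# Hedged sketch: sparse coding per modality and reconstruction-error-weighted fusion.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_feat, n_atoms, n_modalities = 128, 64, 4
dictionaries = [rng.normal(size=(n_feat, n_atoms)) for _ in range(n_modalities)]
dictionaries = [D / np.linalg.norm(D, axis=0) for D in dictionaries]  # unit-norm atoms

x = rng.normal(size=n_feat)                       # one local feature vector

codes, errors = [], []
for D in dictionaries:
    a = orthogonal_mp(D, x, n_nonzero_coefs=8)    # sparse code w.r.t. modality D
    codes.append(a)
    errors.append(np.linalg.norm(x - D @ a) ** 2)

# Modalities that reconstruct the feature well receive larger fusion weights.
errors = np.asarray(errors)
weights = np.exp(-errors / errors.mean())
weights /= weights.sum()
fused = np.concatenate([w * c for w, c in zip(weights, codes)])
print(weights.round(3), fused.shape)
```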

  • Automatic Assessment of Full Left Ventricular Coverage in Cardiac Cine Magnetic Resonance Imaging with Fisher Discriminative 3D CNN

    Abstract: Cardiac magnetic resonance (CMR) images play a growing role in the diagnostic imaging of cardiovascular diseases. Full coverage of the left ventricle (LV), from base to apex, is a basic criterion for CMR image quality and necessary for accurate measurement of cardiac volume and functional assessment. Incomplete coverage of the LV is identified through visual inspection, which is time-consuming and usually done retrospectively in the assessment of large imaging cohorts. This paper proposes a novel automatic method for determining LV coverage from CMR images by using Fisher-discriminative three-dimensional (FD3D) convolutional neural networks (CNNs). In contrast to our previous method employing 2D CNNs, this approach utilizes spatial contextual information in CMR volumes, extracts more representative high-level features and enhances the discriminative capacity of the baseline 2D CNN learning framework, thus achieving superior detection accuracy. A two-stage framework is proposed to identify missing basal and apical slices in CMR volume measurements. First, the FD3D CNN extracts high-level features from the CMR stacks. These image representations are then used to detect the missing basal and apical slices. Compared to the traditional 3D CNN strategy, the proposed FD3D CNN minimizes within-class scatter and maximizes between-class scatter. We performed extensive experiments to validate the proposed method on more than 5,000 independent volumetric CMR scans from the UK Biobank study, achieving low error rates for missing basal/apical slice detection (4.9%/4.6%). The proposed method can also be adopted for assessing LV coverage for other types of CMR image data.

    Keywords: image quality assessment, LV coverage, Fisher discriminant criterion, 3D convolutional neural network, population image analysis

    Updated 2025-09-09 09:28:46
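
    As a rough sketch of the Fisher-discriminative idea in the entry above, the function below computes a penalty on learned features that is small when within-class scatter is low and between-class scatter is high; it could be added to a standard classification loss. The stand-in features, the binary labels, and any loss weighting are assumptions; only the scatter computation is shown.

```python
# Hedged sketch of a Fisher-style discriminative penalty on network features.
import torch

def fisher_loss(features, labels, eps=1e-6):
    """features: (N, D) embeddings; labels: (N,) integer class ids."""
    classes = labels.unique()
    global_mean = features.mean(dim=0)
    within = features.new_tensor(0.0)
    between = features.new_tensor(0.0)
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(dim=0)
        within = within + ((fc - mu_c) ** 2).sum()                       # within-class scatter
        between = between + len(fc) * ((mu_c - global_mean) ** 2).sum()  # between-class scatter
    return within / (between + eps)        # small when classes are well separated

feats = torch.randn(32, 64, requires_grad=True)  # stand-in 3D-CNN feature vectors
labels = torch.randint(0, 2, (32,))              # e.g. basal slice present / missing
loss = fisher_loss(feats, labels)
loss.backward()
```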