An Efficient Recognition Method for Incomplete Iris Image Based on CNN Model [2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Guangzhou, China, 2018.10.8-2018.10.12]
Abstract: The iris is a research hot spot in the field of biometric identification because of its uniqueness, non-contact acquisition, and bioactivity. Incompleteness of the iris introduced during acquisition brings great uncertainty to the subsequent iris region segmentation and iris code matching, thereby reducing the efficiency of iris recognition. This paper describes a deep convolutional neural network model with an adaptive preprocessing mechanism for incomplete irises. Building on iris image normalization, the preprocessing mechanism completes the inner or outer circle boundary, and the iris region is segmented by line fitting and circle fitting so that as many iris features as possible are extracted. The deep convolutional neural network then uses pixel coding of the irregular iris regions to perform iris pattern classification. The model exploits the local feature characterization and weight sharing of deep learning, using large samples to compensate for incomplete local features. Experimental results show that this method achieves a significant accuracy improvement over traditional algorithms.
Keywords: iris recognition, convolution neural network, iris image normalization, algorithm
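To make the CNN stage concrete, here is a minimal sketch of a CNN iris-pattern classifier in PyTorch. The layer sizes, the 64×256 normalized iris strip, and the class count are illustrative assumptions; the paper's exact architecture and its irregular-region pixel coding are not reproduced here.

```python
import torch
import torch.nn as nn

class IrisCNN(nn.Module):
    """Hypothetical small CNN for classifying normalized iris strips."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 1x64x256 -> 16x32x128
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # -> 32x16x64
        )
        self.classifier = nn.Linear(32 * 16 * 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = IrisCNN(num_classes=100)      # class count is an assumption
dummy = torch.randn(4, 1, 64, 256)    # batch of 4 normalized iris strips
print(model(dummy).shape)             # torch.Size([4, 100])
```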
-
Biometric iris recognition using radial basis function neural network
Abstract: Iris recognition is a consistent and efficient biometric identification method owing to the richness of iris texture information. Many methods proposed in the past are built on handcrafted features. The proposed method is based on a feed-forward architecture and uses the k-means clustering algorithm for iris pattern classification. In this paper, iris segmentation is performed using the circular Hough transform, which locates the iris boundaries in the eye and isolates the iris region from eyelashes and other obstructions. Daugman's rubber sheet model is then used to transform the resulting iris portion into polar coordinates during normalization. A unique iris code is generated by a log-Gabor filter to extract the features. Classification is achieved using two neural network structures, the feed-forward neural network and the radial basis function neural network. Experiments have been conducted on the Chinese Academy of Sciences Institute of Automation (CASIA) iris database. The proposed system decreases computation time and database size and increases recognition accuracy compared with existing algorithms.
Keywords: Feed-forward neural network (FNN), Iris segmentation, Normalization, Biometrics, Radial basis function neural network (RBFNN), Iris recognition
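The segmentation and normalization pipeline above maps directly to code. The sketch below implements Daugman's rubber-sheet normalization in NumPy under the assumption that pupil and iris circle parameters have already been found (e.g. by a circular Hough transform); the function name `rubber_sheet` and the output grid size are illustrative.

```python
import numpy as np

def rubber_sheet(eye: np.ndarray,
                 pupil: tuple, iris: tuple,
                 n_radii: int = 64, n_angles: int = 256) -> np.ndarray:
    """eye: grayscale image; pupil/iris: (cx, cy, r) circle parameters."""
    pcx, pcy, pr = pupil
    icx, icy, ir = iris
    theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r = np.linspace(0, 1, n_radii)
    # Boundary points on the pupil and iris circles for each angle.
    xp = pcx + pr * np.cos(theta); yp = pcy + pr * np.sin(theta)
    xi = icx + ir * np.cos(theta); yi = icy + ir * np.sin(theta)
    # Linear interpolation between the two boundaries (Daugman's model).
    xs = (1 - r)[:, None] * xp[None, :] + r[:, None] * xi[None, :]
    ys = (1 - r)[:, None] * yp[None, :] + r[:, None] * yi[None, :]
    xs = np.clip(xs.round().astype(int), 0, eye.shape[1] - 1)
    ys = np.clip(ys.round().astype(int), 0, eye.shape[0] - 1)
    return eye[ys, xs]   # shape (n_radii, n_angles)

eye = np.random.randint(0, 256, (280, 320), dtype=np.uint8)  # stand-in image
strip = rubber_sheet(eye, pupil=(160, 140, 30), iris=(160, 140, 90))
print(strip.shape)  # (64, 256)
```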
-
Multifocus image fusion scheme based on discrete cosine transform and spatial frequency
Abstract: Multifocus images are different images of the same scene captured with different camera focus settings. Considered individually, these images may not be of good quality, so to obtain a good-quality image this work proposes an algorithm for fusing multifocus images using the Discrete Cosine Transform (DCT) and spatial frequency. The proposed algorithm can fuse any number of images: a reduction step calculates the average and the maximum of all source images, reducing the images to be processed to two. The DCT is then applied to the two input images, min-max normalization is performed on the DCT coefficients, and fusion is carried out using spatial frequency. Including this reduction step in existing algorithms such as the Stationary Wavelet Transform, Principal Component Analysis, and spatial fusion also improves their performance. The evaluation metrics show that the proposed algorithm gives better results than other DCT-based algorithms and state-of-the-art techniques.
Keywords: DCT, Min-Max normalization, Image fusion, Spatial frequency
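A minimal sketch of the fusion core, assuming the two inputs are the average and maximum images produced by the reduction step: each 8×8 block is taken into the DCT domain and the block with the higher spatial frequency wins. The paper's min-max normalization of DCT coefficients is omitted, and the block size is an assumption.

```python
import numpy as np
from scipy.fftpack import dct, idct

def spatial_frequency(block: np.ndarray) -> float:
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def fuse(img_a: np.ndarray, img_b: np.ndarray, bs: int = 8) -> np.ndarray:
    """Per-block selection in the DCT domain, driven by spatial frequency."""
    out = np.zeros_like(img_a, dtype=float)
    for i in range(0, img_a.shape[0], bs):
        for j in range(0, img_a.shape[1], bs):
            a = img_a[i:i+bs, j:j+bs].astype(float)
            b = img_b[i:i+bs, j:j+bs].astype(float)
            pick = dct2(a) if spatial_frequency(a) >= spatial_frequency(b) else dct2(b)
            out[i:i+bs, j:j+bs] = idct2(pick)
    return out

avg_img, max_img = np.random.rand(64, 64), np.random.rand(64, 64)
print(fuse(avg_img, max_img).shape)  # (64, 64)
```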
-
Adaptive Fuzzy Switching Noise Reduction Filter for Iris Pattern Recognition
Abstract: Noise reduction is a necessary step in iris recognition systems. This paper proposes an adaptive fuzzy switching noise reduction (AFSNR) filter to reduce noise for iris pattern recognition. The proposed low-complexity AFSNR filter removes noise pixels by fuzzy switching between an adaptive median filter and a filling method. The threshold values of the AFSNR filter are calculated from the histogram statistics of eyelashes, pupils, eyelids, and light illumination. Experimental results on the CASIA V3.0 iris database, with a genuine acceptance rate of 99.72%, show the success of the proposed method.
Keywords: fuzzy switching median, iris normalization, eyelash detection, fuzzy weighted median, noise reduction, Iris pattern recognition
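As a rough illustration of one half of the AFSNR idea, the sketch below implements a simplified adaptive median filter: pixels flagged as noise by a plain intensity threshold (standing in for the paper's histogram statistics of eyelashes, pupils, eyelids, and illumination) are replaced by the median of their non-noise neighbours, with the window grown on demand. The fuzzy switching to the filling method is not modelled.

```python
import numpy as np

def adaptive_median(img: np.ndarray, noise_thr: int = 30,
                    max_half_win: int = 3) -> np.ndarray:
    """Replace pixels darker than noise_thr by the median of their
    non-noise neighbours, growing the window until some are found."""
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] > noise_thr:          # not a noise candidate
                continue
            for half in range(1, max_half_win + 1):
                nb = img[max(0, y-half):y+half+1, max(0, x-half):x+half+1]
                clean = nb[nb > noise_thr]     # neighbours not flagged
                if clean.size:
                    out[y, x] = np.median(clean)
                    break
    return out

img = np.random.randint(60, 256, (64, 64), dtype=np.uint8)
img[10:12, 10:40] = 0                          # synthetic eyelash-like streak
print(adaptive_median(img)[10:12, 10:40].min())  # streak pixels now filled
```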
-
On the influence of the image normalization scheme on texture classification accuracy [2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 2018.9.19-2018.9.21]
Abstract: Texture can be a very rich source of information about an image, and texture analysis finds applications in, among other fields, biomedical imaging. One widely used method of texture analysis is the Gray Level Co-occurrence Matrix (GLCM). Texture analysis with the GLCM method is most often carried out in several stages: determination of regions of interest, normalization, calculation of the GLCM, feature extraction, and finally classification. The values of GLCM-based features depend on the choice of normalization method, which is what this work examines. Normalization is necessary since acquired images often suffer from noise and intensity artifacts; although it cannot eliminate these two effects, it is demonstrated that its application improves texture analysis accuracy. The aim of the work was to analyze the influence of different normalization methods on the discriminating ability of features estimated from the GLCM. The analysis was performed both for Brodatz textures and real magnetic resonance data. The Brodatz textures were corrupted by three types of distortion: intensity nonuniformity, Gaussian noise, and Rician noise. Three types of normalization were tested: min-max, 1-99%, and ±3σ.
Keywords: normalization, classification, image processing, texture analysis, GLCM
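The three tested normalization schemes are easy to state in code. Below is a sketch, assuming the image is afterwards quantized to a small number of gray levels as GLCM computation requires; the level count of 64 is an illustrative choice, and the quantized output can be fed to e.g. `skimage.feature.graycomatrix`.

```python
import numpy as np

def quantize(img: np.ndarray, lo: float, hi: float, n_levels: int = 64):
    """Clip to [lo, hi] and map linearly onto n_levels gray levels."""
    img = np.clip(img.astype(float), lo, hi)
    return ((img - lo) / (hi - lo + 1e-12) * (n_levels - 1)).astype(np.uint8)

def norm_minmax(img, n_levels=64):
    return quantize(img, img.min(), img.max(), n_levels)

def norm_percentile(img, n_levels=64):          # the "1-99%" scheme
    lo, hi = np.percentile(img, [1, 99])
    return quantize(img, lo, hi, n_levels)

def norm_3sigma(img, n_levels=64):              # the "±3σ" scheme
    mu, sd = img.mean(), img.std()
    return quantize(img, mu - 3 * sd, mu + 3 * sd, n_levels)

img = np.random.normal(120, 25, (128, 128))
for f in (norm_minmax, norm_percentile, norm_3sigma):
    q = f(img)
    print(f.__name__, q.min(), q.max())
```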
-
A Low-Light Sensor Image Enhancement Algorithm Based on HSI Color Model
Abstract: Images captured by sensors in unfavorable environments such as low-illumination conditions are usually degraded, exhibiting low visibility, low brightness, and low contrast. To improve such images, this paper proposes a low-light sensor image enhancement algorithm based on the HSI color model. First, we propose a dataset generation method based on the Retinex model to overcome the shortage of sample data. Then, the original low-light image is transformed from RGB to HSI color space. A segmentation exponential method is used to process the saturation component (S), and a specially designed deep convolutional neural network is applied to enhance the intensity component (I). Finally, the result is transformed back into RGB space to obtain the improved image. Experimental results show that the proposed algorithm not only enhances image brightness and contrast significantly but also avoids the color distortion and over-enhancement seen in other state-of-the-art work, thereby effectively improving the quality of sensor images.
Keywords: convolutional neural network, Retinex model, image enhancement, color model, batch normalization, feature learning
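A sketch of the first processing step, the RGB to HSI transform, using the standard HSI formulas; the output hue is scaled to [0, 1]. The exponential stretch of S and the CNN enhancement of I are not reproduced here.

```python
import numpy as np

def rgb_to_hsi(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array in [0, 1], shape (H, W, 3). Returns HSI, same shape."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                                     # intensity
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-12)   # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)  # hue in [0,1]
    return np.stack([h, s, i], axis=-1)

img = np.random.rand(4, 4, 3)      # stand-in low-light RGB image
print(rgb_to_hsi(img).shape)       # (4, 4, 3)
```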
-
Optimization of measuring procedure of farmland soils using laser-induced breakdown spectroscopy
Abstract: Laser-induced breakdown spectroscopy (LIBS) is an emerging multi-elemental analytical technique offering fast, simultaneous quantification of soil properties with minimal sample preparation at an effective cost. However, due to soil heterogeneity, spectral variation limits its quantitative robustness. In this study, 348 soil samples were collected and prepared for acquisition of LIBS spectra. The influence of shot layer and shot number on LIBS quality was evaluated by spectral intensity and relative standard deviation (RSD). The effects of shot layer, shot number, and five normalization procedures on the ability of LIBS to measure soil organic matter (SOM), total nitrogen (TN), and total soluble salt content (TSC) were evaluated using partial least squares regression (PLSR). Increasing the shot number reduced LIBS spectral variance, thereby improving the quantitative accuracy for the selected soil properties. Deep shot layers (the 4th or 5th) reduced the intensities of the soil spectra and thereby decreased the quantitative accuracy for TSC, but improved the SOM and TN predictions. Among the normalization approaches, the method based on correction with the Si line (DS) performed best for improving the quantification of SOM and TN, while the arithmetic average method (AA) was best for TSC prediction. Optimizing the shot layer, shot number, and normalization procedure resulted in fair prediction of SOM (residual prediction deviation of the validation set, RPDV = 1.608), good prediction of TN (RPDV = 1.836), and very good quantitative analysis of TSC (RPDV = 2.456). These findings illustrate very good potential for improving the quantitative accuracy of LIBS soil spectra.
Keywords: quantitative analysis, shot layer, soil properties, shot number, normalization methods, Laser-induced breakdown spectroscopy
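The evaluation implied by the abstract (PLSR calibration scored by RPD on a validation set) can be sketched with scikit-learn as follows. The spectra and property values are random stand-ins, and the spectrum length, component count, and train/validation split are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((348, 2048))            # 348 soil spectra (as in the study)
y = rng.random(348)                    # a soil property, e.g. SOM

X_train, X_val = X[:250], X[250:]
y_train, y_val = y[:250], y[250:]

pls = PLSRegression(n_components=10).fit(X_train, y_train)
y_pred = pls.predict(X_val).ravel()

rmsep = np.sqrt(mean_squared_error(y_val, y_pred))
rpd = np.std(y_val, ddof=1) / rmsep    # RPD of the validation set (RPDV)
print(f"RMSEP={rmsep:.3f}, RPD={rpd:.2f}")
```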
-
A Decision Support Tool For Early Detection of Knee OsteoArthritis using X-ray Imaging and Machine Learning: Data from the OsteoArthritis Initiative
Abstract: This paper presents a fully developed computer-aided diagnosis (CAD) system for early knee osteoarthritis (OA) detection using knee X-ray imaging and machine learning algorithms. The X-ray images are first preprocessed in the Fourier domain using a circular Fourier filter. Then, a novel normalization method based on predictive modeling using multivariate linear regression (MLR) is applied to the data to reduce the variability between OA and healthy subjects. At the feature selection/extraction stage, independent component analysis (ICA) is used to reduce the dimensionality. Finally, Naive Bayes and random forest classifiers are used for the classification task. This novel image-based approach is applied to 1024 knee X-ray images from the public OsteoArthritis Initiative (OAI) database. The results show that the proposed system has a good predictive classification rate for OA detection (82.98% accuracy, 87.15% sensitivity, and up to 80.65% specificity).
Keywords: Computer Aided diagnosis System, Intensity Normalization, Classification, OsteoArthritis
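The classification back-end (ICA for dimensionality reduction, then Naive Bayes and random forest) can be sketched with scikit-learn as below. The circular Fourier filtering and MLR normalization stages are omitted, and the feature dimension, component count, and labels are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1024, 400))       # 1024 images (as in the study), flattened
y = rng.integers(0, 2, 1024)      # 0 = healthy, 1 = early OA (synthetic)

for clf in (GaussianNB(), RandomForestClassifier(n_estimators=200)):
    pipe = make_pipeline(FastICA(n_components=30, max_iter=1000), clf)
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(type(clf).__name__, f"accuracy={acc:.3f}")
```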
-
Improved measurement on quantitative analysis of coal properties using laser induced breakdown spectroscopy
Abstract: Rapid or online analysis of coal properties is of great significance for combustion optimization in thermal power plants. In this work, a set of calibration schemes based on laser-induced breakdown spectroscopy (LIBS) was determined to improve the quantitative measurement of coal properties, including proximate analysis (calorific value, ash, volatile content) and ultimate analysis (carbon and hydrogen). First, different normalization methods (channel normalization and normalization with the whole spectral area) combined with two regression algorithms (partial least-squares regression [PLSR] and support vector regression [SVR]) were compared to initially select an appropriate calibration method for each indicator. Then, the influence of de-noising by wavelet threshold de-noising (WTD) on quantitative analysis was further studied, and the final analysis scheme for each indicator was determined. The results showed that WTD coupled with SVR estimated calorific value and ash well, with root mean square errors of prediction (RMSEP) of 0.80 MJ kg⁻¹ and 0.60%. Coupling WTD and PLSR performed best for the measurement of volatile content, with an RMSEP of 0.76%. For the quantitative analysis of carbon and hydrogen, normalization with the whole spectral area combined with SVR gave better results, with RMSEPs of 1.08% and 0.21%, respectively. The corresponding average relative standard deviations (RSD) for calorific value, ash, volatile content, carbon, and hydrogen of the validation sets were 0.26 MJ kg⁻¹, 0.57%, 0.79%, 0.47%, and 0.08%, respectively. These results demonstrate that selecting appropriate spectral pre-processing coupled with calibration strategies for each indicator can effectively improve the accuracy and precision of coal property measurement.
Keywords: partial least-squares regression (PLSR), quantitative analysis, normalization, Laser-induced breakdown spectroscopy (LIBS), coal properties, support vector regression (SVR), wavelet threshold de-noising (WTD)
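A sketch of wavelet threshold de-noising (WTD) on a spectrum with PyWavelets, followed by normalization with the whole spectral area. The db4 wavelet, decomposition level, and universal soft threshold are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt

def wtd(spectrum: np.ndarray, wavelet: str = "db4", level: int = 4):
    """Soft-threshold the detail coefficients and reconstruct."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(spectrum)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

spec = np.abs(np.random.default_rng(0).normal(0, 1, 4096)).cumsum()
den = wtd(spec)
den /= den.sum()                   # normalization with whole spectral area
print(den.shape, den.sum())        # (4096,) 1.0
```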
-
Iolite Based Bulk Normalization as 100% (m/m) Quantification Strategy for Reduction of Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry Transient Signal
Abstract: The Iolite package has drawn increasing attention in the laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) community in recent years because of its powerful data-handling capacity, excellent signal visualization, and open-source calculation codes. In this study, the application of the Iolite package to LA-ICP-MS elemental quantification was investigated, and a calculation code for the bulk normalization as 100% (m/m) strategy was compiled. We found that the spline interpolation approach was better than the linear one for correcting time-dependent instrument drift. BCR-2G was used as the quality control material to assess the proposed code, and the results revealed that the code is practical and reliable. The analytical accuracy was influenced by the calibration materials used: TiO2, MgO, K2O, and the rare earth elements in BCR-2G were slightly off (5%–10%) when NIST SRM 610 was the calibrator, and Cr and Mo were higher (10%–30%) than the recommended values when StHs6/80-G was the calibrator. These phenomena are attributed to matrix effects or inaccurate values of the corresponding calibrators. Three main sources of the LA-ICP-MS combined uncertainty were recognized: the uncertainty of the recommended values of the analytes in the calibration material, the uncertainty of the measured intensity ratios in the sample, and the error in the bulk normalization as 100% (m/m) strategy. A total of 50 elements in the CGSG glass reference materials were quantified with the proposed Iolite code. Major elements (except MnO, CaO, and P2O5) matched the recommended values within 5%, and trace elements (except Cr, Ni, Zn, Ga, Mo, and Sb) agreed with the recommended values within 10%. The dataset reported in this study is helpful for the value certification of the CGSG reference materials. Overall, the proposed Iolite code broadens the application of the Iolite package in the reduction of LA-ICP-MS transient signals for elemental determination.
Keywords: CGSG reference material, Combined uncertainty, Laser ablation-inductively coupled plasma-mass spectrometry, Iolite package, Bulk normalization as 100% (m/m)
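The bulk normalization as 100% (m/m) strategy itself reduces to a short calculation: derive relative sensitivities from a calibrator, convert sample intensities to provisional concentrations, and rescale so the total is 100%. The sketch below uses synthetic numbers and ignores drift correction and oxide conversion, which the real Iolite code handles.

```python
import numpy as np

cal_intensity = np.array([1.2e6, 8.0e5, 3.5e5])   # counts for 3 analytes
cal_conc = np.array([50.0, 30.0, 20.0])           # % m/m in the calibrator
sensitivity = cal_intensity / cal_conc            # counts per % m/m

sample_intensity = np.array([9.0e5, 9.5e5, 2.0e5])
provisional = sample_intensity / sensitivity      # un-normalized % m/m
conc = provisional / provisional.sum() * 100.0    # force the sum to 100%
print(conc, conc.sum())                           # sums to 100.0
```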