Research Objective
To address the problem of processing large-scale image data on small quantum devices, the authors propose a hybrid quantum-classical framework that requires no dataset-specific manual pre-processing and can handle both grayscale and RGB images.
Research Results
The proposed hybrid quantum-classical framework successfully processes large images on small quantum devices without dataset-specific manual pre-processing, achieving classification accuracy comparable to that of classical methods. It is designed to scale with future quantum hardware and applies to a range of image types, including medical imaging. Future work should focus on handling 3D and 4D data and on improving compression to minimize information loss.
Research Limitations
The framework is limited by the current quantum hardware (D-Wave 2000Q with up to 2048 qubits), which restricts the maximum RBM size to 64x24, potentially leading to information loss in compression. It does not demonstrate quantum advantage over classical methods and may not scale well to very large or complex datasets without further hardware improvements.
1:Experimental Design and Method Selection:
The framework combines data compression with a convolutional autoencoder, quantum pre-training of a Restricted Boltzmann Machine (RBM) on a D-Wave quantum annealer, and classical training of a neural network for image classification. The autoencoder compresses images to a size suitable for quantum processing, the RBM is trained with quantum sampling in place of classical Gibbs sampling, and the resulting RBM weights initialize the classification network.
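As an illustration of the compression stage, the sketch below builds a small convolutional autoencoder in a Keras-style API. The layer sizes, the latent dimension of 64, and the activation choices are readability assumptions rather than the architecture reported in the paper; only the mean-absolute-error loss and the role of the encoder output (latent vectors handed to the RBM) follow the description above.

```python
# Minimal sketch of the compression stage (assumed architecture, not the paper's).
from tensorflow.keras import layers, models

def build_autoencoder(input_shape=(28, 28, 1), latent_dim=64):
    """Compress images into a latent vector small enough for the RBM stage."""
    encoder_input = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(encoder_input)
    x = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    # Sigmoid keeps latent values in [0, 1], convenient for a binary-unit RBM.
    latent = layers.Dense(latent_dim, activation="sigmoid", name="latent")(x)

    # Decoder dimensions assume the default 28x28 grayscale input.
    x = layers.Dense(7 * 7 * 8, activation="relu")(latent)
    x = layers.Reshape((7, 7, 8))(x)
    x = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    decoded = layers.Conv2D(input_shape[-1], 3, padding="same", activation="sigmoid")(x)

    autoencoder = models.Model(encoder_input, decoded)
    encoder = models.Model(encoder_input, latent)
    # Mean absolute error matches the autoencoder loss reported in the paper.
    autoencoder.compile(optimizer="rmsprop", loss="mae")
    return autoencoder, encoder
```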
2:Sample Selection and Data Sources:
Four datasets are used: MNIST (grayscale 28x28 images of handwritten digits), Fashion-MNIST (grayscale 28x28 images of clothing), a medical imaging dataset (650 grayscale 512x512 images including fluoroscopic and venogram images), and a laparoscopic tool dataset (1000 RGB 596x596 images of medical tools).
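As a small illustration of the shared input format, the loader below pulls the two public benchmarks (MNIST and Fashion-MNIST) and casts them into the same (height, width, channels) float layout that an RGB dataset would use; the medical and laparoscopic datasets are not covered here, so no loading paths are invented for them.

```python
# Illustrative loading step for the two public benchmarks only.
import numpy as np
import tensorflow as tf

def load_grayscale_benchmark(name="mnist"):
    """Load MNIST or Fashion-MNIST in the (H, W, C) float32 format shared with RGB data."""
    source = tf.keras.datasets.mnist if name == "mnist" else tf.keras.datasets.fashion_mnist
    (x_train, y_train), (x_test, y_test) = source.load_data()
    # Scale pixels to [0, 1] and add a channel axis so grayscale (H, W, 1)
    # and RGB (H, W, 3) images flow through the same autoencoder interface.
    x_train = x_train.astype("float32")[..., np.newaxis] / 255.0
    x_test = x_test.astype("float32")[..., np.newaxis] / 255.0
    return (x_train, y_train), (x_test, y_test)
```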
3:List of Experimental Equipment and Materials:
D-Wave 2000Q quantum annealer (up to 2048 qubits), classical computers for autoencoder and neural network training, and standard machine learning libraries providing the RMSProp and Adadelta optimizers.
4:Experimental Procedures and Operational Workflow:
First, train an autoencoder to compress images. Second, use the compressed data to train an RBM on the D-Wave device with 5000 sampling repetitions. Third, initialize a neural network with the RBM weights and train it classically for image classification. Evaluate using accuracy metrics over epochs.
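The sketch below outlines the RBM update at step two, with the negative-phase sampler left pluggable: in the paper this is where 5000 reads from the D-Wave 2000Q stand in for classical Gibbs sampling, whereas the fallback shown here is purely classical. The contrastive-divergence-style gradient, function names, and learning rate are illustrative assumptions, not the authors' exact code.

```python
# RBM pre-training sketch with a swappable negative-phase sampler.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_negative_samples(W, b_vis, b_hid, v0, steps=1):
    """Classical stand-in for the 5000-read quantum sample of the model distribution."""
    v = v0
    for _ in range(steps):
        h = (sigmoid(v @ W + b_hid) > rng.random((v.shape[0], W.shape[1]))).astype(float)
        v = (sigmoid(h @ W.T + b_vis) > rng.random((v.shape[0], W.shape[0]))).astype(float)
    return v, sigmoid(v @ W + b_hid)

def rbm_update(W, b_vis, b_hid, v_data, lr=0.05, sampler=gibbs_negative_samples):
    """One gradient step: positive phase from data, negative phase from the sampler."""
    h_data = sigmoid(v_data @ W + b_hid)                 # positive phase
    v_model, h_model = sampler(W, b_vis, b_hid, v_data)  # negative phase (quantum in the paper)
    W += lr * (v_data.T @ h_data - v_model.T @ h_model) / len(v_data)
    b_vis += lr * (v_data - v_model).mean(axis=0)
    b_hid += lr * (h_data - h_model).mean(axis=0)
    return W, b_vis, b_hid
```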
5:Data Analysis Methods:
Classification accuracy is computed as the number of successful classifications divided by the total number of classifications. Training curves and loss functions (mean absolute error for the autoencoder, binary crossentropy for the classifier) are analyzed. Comparisons are made across different weight-initialization methods (classical RBM, quantum RBM, constant, Glorot).
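To illustrate the comparison set-up, the sketch below builds the same dense classifier with different first-layer initializations (Glorot, constant, and a stand-in for RBM weights) and tracks accuracy per epoch through Keras; the layer sizes, the Adadelta optimizer, and the substitution of categorical for binary crossentropy on the softmax output are assumptions rather than the authors' exact configuration.

```python
# Comparison of weight-initialization schemes for the downstream classifier.
import numpy as np
from tensorflow.keras import layers, models, initializers

def build_classifier(latent_dim, n_hidden, n_classes, kernel_initializer="glorot_uniform"):
    """Dense classifier whose first layer matches the RBM's visible-to-hidden shape."""
    model = models.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(n_hidden, activation="sigmoid", kernel_initializer=kernel_initializer),
        layers.Dense(n_classes, activation="softmax"),
    ])
    # The paper cites binary crossentropy; categorical crossentropy is substituted
    # here as the usual pairing with a softmax output.
    model.compile(optimizer="adadelta", loss="categorical_crossentropy",
                  metrics=["accuracy"])  # accuracy = successful / total classifications
    return model

# Candidate initializations from the comparison (64 visible x 24 hidden units).
candidates = {
    "glorot": build_classifier(64, 24, 10, "glorot_uniform"),
    "constant": build_classifier(64, 24, 10, initializers.Constant(0.1)),
    "rbm": build_classifier(64, 24, 10),
}
# For the classical- or quantum-RBM variants, the first-layer kernel is overwritten
# with pre-trained RBM weights; a random placeholder stands in for them here.
rbm_weights = np.random.default_rng(0).normal(size=(64, 24)).astype("float32")
candidates["rbm"].layers[0].set_weights([rbm_weights, np.zeros(24, dtype="float32")])
# Per-epoch accuracy curves then come from model.fit(..., validation_data=...).history.
```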