Quantum Architecture Search via Deep Reinforcement Learning


Description

This repository contains an unofficial implementation of the Quantum Architecture Search environments and their applications, as presented in:

  • Paper: Quantum Architecture Search via Deep Reinforcement Learning
  • Authors: En-Jui Kuo, Yao-Lung L. Fang, Samuel Yen-Chi Chen
  • Date: 2021

The customized Gym environments are built using Google's Cirq framework.

Experiments

The experiments in the paper are reproduced using the reinforcement learning agents provided by Stable-Baselines3. You can run the notebook locally or use this Google Colab link.

Details

The agent designs a quantum circuit by taking actions in the environment. Each action corresponds to a gate applied to one or more wires. The goal is to build a circuit U that generates the target n-qubit quantum state, which belongs to the environment and is hidden from the agent. At each time step, the agent receives a penalty if the fidelity between the resulting quantum state and the target is below some threshold. Otherwise, the episode ends and the agent receives the fidelity minus the penalty. The environment state is computed using a fixed set of observables.
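
In pseudocode, the per-step reward logic described above is roughly the following (a sketch using the parameter names from the Parameters section below, not the repository's exact implementation):

def step_reward(fidelity, fidelity_threshold=0.95, reward_penalty=0.01):
    # Fidelity below the threshold: flat penalty, the episode continues.
    if fidelity < fidelity_threshold:
        return -reward_penalty, False  # (reward, done)
    # Threshold reached: fidelity minus the penalty, the episode ends.
    return fidelity - reward_penalty, True  # (reward, done)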

This repository contains the implementation of all the environments in the paper:

  • 2-qubit target and its noisy variant
  • 3-qubit target and its noisy variant

Moreover, a more general environment is provided that can be built from any n-qubit target, any set of environment actions (gates), and any set of environment states (observables).
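
For example, a minimal sketch of building the n-qubit environment with a custom target (assuming, as with the fixed-size environments, that the target vector is passed as a keyword argument and the qubit count follows from its length):

import numpy as np

import gym
import qas_gym  # registers the environments

# Hypothetical 4-qubit GHZ target; n follows from len(target) = 2**n.
target = np.zeros(2**4)
target[0] = target[-1] = 1 / np.sqrt(2)

env = gym.make('BasicNQubit-v0', target=target)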

Setup

To install, clone this repository and execute the following commands:

$ cd quantum-arch-search
$ pip install -r requirements.txt
$ pip install -e .

Environments

Names

The full list of environments:

  • Basic: 'BasicTwoQubit-v0', 'BasicThreeQubit-v0', 'BasicNQubit-v0'
  • Noisy: 'NoisyTwoQubit-v0', 'NoisyThreeQubit-v0', 'NoisyNQubit-v0'

Parameters

Their corresponding parameters are:

| Parameter | Type | Explanation | Basic | Noisy |
| --- | --- | --- | --- | --- |
| target | numpy.ndarray | target quantum state, vector of size 2^n | x | x |
| fidelity_threshold | float | fidelity threshold (default: 0.95) | x | x |
| reward_penalty | float | reward penalty (default: 0.01) | x | x |
| max_timesteps | int | maximum circuit size (default: 20) | x | x |
| error_rate | float | measurement and gate error rate (default: 0.001) | | x |

Target

By default, the target is set to:

  • Bell state for the 2-qubit environments
  • GHZ state for the 3-qubit environments
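
Concretely, these defaults correspond to the following state vectors:

import numpy as np

# Bell state (|00> + |11>) / sqrt(2): default 2-qubit target
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# GHZ state (|000> + |111>) / sqrt(2): default 3-qubit target
ghz = np.zeros(8)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)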

Gates (Actions)

The set of actions is a fixed list of gates, each applied to specific wires (listed as a table image in the original README).
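
As an illustration only (the specific gates below are an assumption, not the repository's exact defaults), such an action set can be written in Cirq as one operation per gate/wire combination:

import cirq

qubits = cirq.LineQubit.range(2)

# One action per (gate, qubit) pair, plus CNOTs between neighboring wires.
actions = [gate(q) for q in qubits for gate in (cirq.X, cirq.Y, cirq.Z, cirq.H)]
actions += [cirq.CNOT(q0, q1) for q0, q1 in zip(qubits, qubits[1:])]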

Observables (States)

The set of states is defined by measuring the circuit output with a fixed set of quantum observables (listed as a table image in the original README).
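
A typical choice for such observables, used here purely as an illustration (the actual default set is defined in qas_gym), is the single-qubit Pauli expectation values:

import cirq

qubits = cirq.LineQubit.range(2)

# Illustrative observables: Pauli X, Y, Z expectation values on each qubit.
observables = [pauli(q) for q in qubits for pauli in (cirq.X, cirq.Y, cirq.Z)]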

Example

Initialization

You can simply create your environment using:

import numpy as np

import gym
import qas_gym

# Bell state (|00> + |11>) / sqrt(2)
target = np.asarray([0.70710678+0.j, 0.+0.j, 0.+0.j, 0.70710678+0.j])

env = gym.make('BasicTwoQubit-v0', target=target, fidelity_threshold=0.95)

Evaluation

You can train your agent and then evaluate it. A minimal training sketch (assuming Stable-Baselines3's PPO on the environment created above; any SB3 algorithm works):
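
from stable_baselines3 import PPO

model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=10000)

The evaluation loop is then the standard Gym interaction: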

state = env.reset()
done = False
while not done:
    # SB3's predict returns (action, hidden_state); keep only the action.
    action, _ = model.predict(state)
    state, reward, done, info = env.step(action)
    env.render()

The info dictionary contains the current Cirq circuit and the fidelity measure.
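
For example (the key names below are assumed for illustration; check qas_gym for the exact keys):

print(info['fidelity'])  # assumed key name
print(info['circuit'])   # assumed key name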

Rendering

You can also render the environment to watch how the agent acts.

import time

import gym
import qas_gym
from IPython.display import clear_output

env = gym.make('BasicTwoQubit-v0')

_ = env.reset()
for _ in range(10):
    # sample a random gate action
    action = env.action_space.sample()
    _ = env.step(action)
    # redraw the current circuit
    clear_output(wait=True)
    env.render()
    time.sleep(1)
