Boson sampling [1] offers a promising route towards a demonstration of a quantum advantage, i.e. a computational task that a quantum device can perform faster than any classical computer. The task consists of sampling from the output distribution of a network of linear interferometers fed with single photons. For a sufficiently large number of photons, it is strongly believed that a quantum device performing this task directly will outperform any classical computer simulating the experiment.
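As a concrete illustration (not part of the abstract itself): in the Aaronson–Arkhipov formulation, the probability of a collision-free output pattern is the squared modulus of the permanent of the corresponding submatrix of the interferometer unitary. A minimal Python sketch using Ryser's formula, applied to the two-photon Hong–Ou–Mandel case:

```python
import math

def permanent(M):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n)."""
    n = len(M)
    total = 0.0
    for S in range(1, 1 << n):  # all non-empty column subsets
        prod = 1.0
        for i in range(n):
            prod *= sum(M[i][j] for j in range(n) if (S >> j) & 1)
        total += (-1) ** bin(S).count("1") * prod
    return (-1) ** n * total

# Hong-Ou-Mandel: two photons entering a 50:50 beamsplitter never
# exit in separate ports, since the permanent of the unitary vanishes.
bs = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
      [1 / math.sqrt(2), -1 / math.sqrt(2)]]
p_coincidence = abs(permanent(bs)) ** 2
print(p_coincidence)  # → 0.0
```

The exponential cost of the permanent is what makes direct classical simulation of ideal boson sampling hard; the algorithms discussed below exploit imperfections to sidestep it.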

One major limitation in the field of boson sampling has been the relatively poor understanding of how experimental imperfections compromise the computational hardness. Such imperfections may include photon distinguishability, photon loss, and noise in the interferometers and detectors. Understanding the impact of these imperfections would elucidate which aspects of photonic technologies have to be improved to achieve large-scale boson sampling.

In this work, we provide upper bounds on the strength of imperfections which are permissible in a boson sampling experiment. We do this by presenting classical algorithms which exploit two of the major imperfections in photonics, namely photon distinguishability and photon loss.

For distinguishability, we will present an algorithm [2] in which the interference of partially distinguishable photons is decomposed into interference of fewer, perfectly indistinguishable ones. The number of these effective photons depends only on the degree of distinguishability, not on the total number of photons used, which limits how many partially distinguishable photons can be meaningfully interfered.
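To see why the truncation order is independent of the photon number, consider a toy model (our illustration, not the exact error bound of [2]) in which the contribution of j-photon interference terms is suppressed by a factor x**(2*j), with x the pairwise overlap between photon wavefunctions:

```python
import math

def truncation_order(x, eps):
    """Smallest k such that x**(2*(k+1)) <= eps: in the toy model,
    interference of more than k photons contributes less than eps,
    so only k effective photons need to be treated exactly."""
    return max(0, math.ceil(math.log(eps) / (2 * math.log(x))) - 1)

# The result depends only on the overlap x and the target error eps,
# never on the total number of photons n.
print(truncation_order(0.9, 1e-3))  # → 32
```

Note that `truncation_order` takes no argument `n` at all: whether the experiment uses 20 or 200 photons, the same number of effective photons suffices at a fixed error, which is the core of the classical-simulation argument.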

For photon loss, we will show two algorithms. First, an algorithm [3] which can simulate any boson sampling experiment in which losses scale quadratically with the number of photons. This covers all photonic-chip experiments, where path lengths grow with the number of photons and losses therefore grow exponentially. Second, we will show results on an algorithm which simulates boson sampling with linear losses, i.e. where the probability of losing any given photon is independent of the total number of photons.
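The distinction between the two loss regimes can be made concrete with a simple independent-loss model (our illustration; the algorithms of [3] are more involved). If each of n photons traverses `depth` optical components, each with transmission t, the expected number of survivors is n·t**depth; when the depth grows with n, as it does for path lengths on a chip, the surviving fraction decays exponentially:

```python
def expected_survivors(n, t, depth):
    """Expected number of surviving photons when each of n photons
    independently traverses `depth` components of transmission t."""
    return n * t ** depth

# Fixed depth (linear loss): a constant fraction of photons survives.
print(expected_survivors(50, 0.99, 10))   # ≈ 45
# Depth growing with n (chip path lengths): the fraction decays with n.
print(expected_survivors(50, 0.99, 50))   # ≈ 30
```

In the second regime the surviving fraction t**n shrinks as the experiment scales, which is what the first algorithm exploits; the linear-loss regime, with a constant surviving fraction, is the harder and more interesting case.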

Together, these results form a minimum set of specifications that any boson sampling experiment must meet in order to achieve computational hardness.

[1] S. Aaronson, A. Arkhipov, "The computational complexity of linear optics", Theory Comput. **9**, 143–252 (2013).

[2] J.J. Renema, A. Menssen, "Efficient algorithm for boson sampling with partially distinguishable photons", Phys. Rev. Lett. **120**, 220502 (2018).

[3] R. García-Patrón, J.J. Renema, V. Shchesnovich, "Simulating boson sampling in lossy architectures", preprint: arXiv:1712.10037 (2017).

Quantum information processing and computing, Quantum optics and non-classical light sources