In a recent paper [Goy et al., Phys. Rev. Lett. 121, 243902, 2018], we showed that deep neural networks (DNNs) are very efficient solvers for phase retrieval problems, especially when the photon budget is limited. However, the performance of the DNN is strongly conditioned by a preprocessing step that produces a proper initial guess. In this paper, we study the influence of the preprocessing in more detail, in particular the choice of the preprocessing operator. We also empirically demonstrate that, for a DenseNet architecture, the performance of the DNN increases with the number of layers up to a point, after which it saturates.
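To make the role of the preprocessing concrete, the following is a minimal sketch of one common choice of preprocessing operator: back-propagating the square root of the measured intensity to the object plane with the angular spectrum method and feeding the resulting phase map to the DNN as the initial guess. It assumes a square, defocused-intensity measurement; the function name and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def backpropagate(intensity, wavelength, pixel_size, distance):
    """Illustrative preprocessing operator: angular-spectrum back-propagation
    of the amplitude estimate, returning a phase map as the initial guess."""
    # Amplitude estimate from (possibly noisy, low-photon) intensity counts.
    field = np.sqrt(np.maximum(intensity, 0.0))
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    # Angular spectrum transfer function; evanescent components are discarded.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(-1j * kz * distance), 0.0)  # conjugate kernel
    field_obj = np.fft.ifft2(np.fft.fft2(field) * H)
    return np.angle(field_obj)  # phase map passed to the DNN

# Example call with placeholder optical parameters (values are hypothetical):
# guess = backpropagate(I, wavelength=632.8e-9, pixel_size=6.5e-6, distance=0.1)
```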