**6. Conclusions**

In this article, we considered the expressive power of ReLU networks with bounded hidden layer widths. In particular, we showed that ReLU networks of width *d* + 3 and arbitrary depth are capable of arbitrarily good approximations of any scalar continuous function of *d* variables. We showed further that this bound can be reduced to *d* + 1 in the case of convex functions, and we gave quantitative rates of approximation in all cases. Our results show that deep ReLU networks, even at moderate width, are universal function approximators. Our work leaves open the question of whether such function representations can be learned by (stochastic) gradient descent from a random initialization. We will take up this topic in future work.
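To make the architecture concrete, the following is a minimal sketch (in Python/NumPy, with illustrative helper names) of the kind of fixed-width network the width bound concerns: every hidden layer has width *d* + 3 and the output is a single scalar. The weights here are random placeholders, not the constructive approximation from the proofs.

```python
import numpy as np

def narrow_relu_net(d, depth, rng=np.random.default_rng(0)):
    """Build a deep ReLU network R^d -> R whose hidden layers all have
    width d + 3 (the width bound discussed above). Weights are random
    placeholders, not the paper's construction."""
    width = d + 3
    sizes = [d] + [width] * depth + [1]
    return [(rng.standard_normal((m, n)) / np.sqrt(n), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Evaluate the network: ReLU on every hidden layer, linear output."""
    h = x
    for i, (W, b) in enumerate(params):
        h = W @ h + b
        if i < len(params) - 1:   # no nonlinearity on the output layer
            h = np.maximum(h, 0.0)
    return h

# Example: a width-(d + 3), depth-10 network on inputs in R^5.
params = narrow_relu_net(d=5, depth=10)
print(forward(params, np.ones(5)))   # scalar output, shape (1,)
```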

**Funding:** This research was funded by NSF Grants DMS-1855684 and CCF-1934904.

**Acknowledgments:** It is a pleasure to thank Elchanan Mossel and Leonid Hanin for many helpful discussions. This paper originated while I attended EM's class on deep learning [18]. In particular, I would like to thank him for suggesting proving quantitative bounds in Theorem 2 and for suggesting that a lower bound can be obtained by taking piecewise linear functions with many different directions. He also pointed out that the width estimates for the continuous function in Theorem 1 were sub-optimal in a previous draft. I would also like to thank Leonid Hanin for detailed comments on several previous drafts and for useful references to results in approximation theory. I am also grateful to Brandon Rule and Matus Telgarsky for comments on an earlier version of this article. I am also grateful to BR for the original suggestion to investigate the expressivity of neural nets of width two. I would also like to thank Max Kleiman-Weiner for useful comments and discussion. Finally, I thank Zhou Lu for pointing out a serious error in what used to be Theorem 3 in a previous version of this article. I have removed that result.

**Conflicts of Interest:** The author declares no conflict of interest.
