# View from the arXiv: Jul 4 - Jul 8 2022

A summary of new preprints appearing on arXiv during the week of July 4th to July 8th 2022

Welcome to ‘View from the arXiv’, where each week I’ll put together a short list of new preprints which appeared on arxiv.org during the week and which I’ve found interesting. I’ll focus on the categories ‘*Disordered Systems and Neural Networks*’, ‘*Quantum Gases*’ and ‘*Strongly Correlated Electrons*’, and in particular the first two, as these are the main areas of the arXiv which I follow. This is an entirely subjective list of things which appeal to me, and of course there are far too many interesting papers to be able to cover all of them, so I just choose one or two from each day to highlight here. As these are all preprints which have not yet been peer-reviewed, remember to take any claims and conclusions with a grain of salt, and be sure to cast a critical eye over the work if you’re interested in more details. (And let’s face it, this caveat should be applied to any published work too…!)

You can also subscribe to this as a *free* newsletter if you’d like it e-mailed to you every Monday morning!

**Note**: *View from the arXiv* will be taking a short hiatus for a few weeks, as I’ll be on vacation, visiting my home country and my family for the first time since late 2019. I may put out a partial list of papers from the beginning of next week, but I’ll be away for a few weeks after that and won’t be keeping up with the arXiv. So, please be patient and forgive the downtime, and I’ll be back with you again at the beginning of August!

**July 4th**

*The Disordered Heterogeneous Universe: Galaxy Distribution and Clustering Across Length Scales, by Oliver H. E. Philcox, and Salvatore Torquato*: It’s not often that we see a paper in the ‘disordered systems’ category of arXiv that studies objects as large as galaxies, so this paper really caught my eye. This work takes concepts from statistical physics and the study of disordered media, and applies them to try to understand the formation of large-scale structure in the universe, particularly the distribution of galaxies in space. The authors are very clear that this is a simplified proof-of-concept study that neglects some important effects needed to give an accurate description of the way galaxies are distributed, but it’s nonetheless a really interesting application of tools developed in the statistical physics community to a very different sort of problem. If more realistic details can be incorporated in this theory, it could be an extremely innovative approach to the study of galaxies on large length scales.

*Thirty milliseconds in the life of a supercooled liquid, by Camille Scalliet, Benjamin Guiselin, and Ludovic Berthier*: Thirty milliseconds doesn’t sound like much, but for some physical systems, a lot can happen in that time! Paradoxically, studying and simulating this very short timescale takes a huge amount of computational time and effort - the simulations reported in this paper took an entire month just to simulate those 30ms! The reason is that supercooled liquids (i.e. liquids which are carefully cooled below their freezing point) exhibit extremely complicated dynamics, which require huge amounts of numerical effort to solve. This work sheds some interesting light on slow dynamics close to the glass transition in certain materials, and points the way towards future research directions for the field.

**July 5th**

*Synthetic gauge field in two interacting ultracold atomic gases without an optical lattice, by J. Mumford*: Synthetic gauge fields are artificial degrees of freedom that can be added to simulations or experiments to mimic the effects of some other physical field, for example using a periodic drive to engineer synthetic dimensions such that a one-dimensional driven system behaves in some ways like a two-dimensional static system. This work investigates synthetic gauge fields in ultracold atomic gases, using two interacting Bose-Einstein condensates subject to a periodic drive, and shows that this system mimics the physics of a particle on a two-dimensional lattice in the presence of a magnetic field. The manuscript focuses on topological effects, and is an interesting study of how synthetic gauge fields may be able to help engineer exotic topological systems and tailor-made properties for the technologies of the (near) future.
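For context, the textbook way a magnetic field enters a lattice model (i.e. the thing a synthetic gauge field is built to mimic) is the Peierls substitution, where hopping amplitudes pick up complex phases. Here’s a minimal sketch of my own (not taken from the paper) for a small square lattice in the Landau gauge:

```python
import numpy as np

# Peierls substitution: a magnetic field enters a 2D tight-binding lattice
# as complex phases on the hopping amplitudes, t -> t * exp(i * phase).
Lx, Ly = 6, 6
alpha = 1 / 3            # flux quanta per plaquette
N = Lx * Ly
H = np.zeros((N, N), dtype=complex)

def idx(x, y):
    return x * Ly + y

for x in range(Lx):
    for y in range(Ly):
        if x + 1 < Lx:   # hop in x: no phase in the Landau gauge
            H[idx(x + 1, y), idx(x, y)] = -1.0
        if y + 1 < Ly:   # hop in y: phase grows linearly with x
            H[idx(x, y + 1), idx(x, y)] = -np.exp(2j * np.pi * alpha * x)

H = H + H.conj().T       # add the Hermitian-conjugate hops
E = np.linalg.eigvalsh(H)
print(E[:3])             # low-lying part of the (Hofstadter-like) spectrum
```

The gauge field is pure bookkeeping here; what’s physical is the total phase accumulated around each plaquette, which is why a *synthetic* phase engineered by driving can reproduce the same physics.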

*New trends in quantum integrability: Recent experiments with ultracold atoms, by Xi-Wen Guan, and Peng He*: This paper is a nice review of integrable quantum systems in 1D, with a particular emphasis on experiments which can be (or have been) performed with ultracold atomic gases. An integrable system is one with an extensive number of conserved quantities: the presence of a large number of conservation laws constrains the non-equilibrium dynamics of the system and (usually) prevents it from reaching a thermal equilibrium state at long times, as would be expected in a non-integrable system. Many integrable systems are exactly solvable (using, for example, the Bethe ansatz), but just because a solution is possible does not mean that it’s easy. Current frontiers in the field involve using concepts like Generalised Gibbs Ensembles (GGEs) and generalised hydrodynamics (GHD) to understand the long-time dynamics of integrable systems, and given the remarkable experimental progress in recent years, there is a rapid feedback loop between experiments and theory in this field. This review is quite timely, then, and a great way to catch up if you’ve not been keeping up with recent developments in integrable systems!

*Quantum speed limit for states with a bounded energy spectrum, by Gal Ness, Andrea Alberti, and Yoav Sagi*: Understanding the propagation of energy and information in quantum systems is a hugely important area of research, particularly for the development of quantum technologies which will rely on operations which manipulate and transport information. One key area of research is effective ‘speed limits’ in quantum systems, i.e. understanding how fast particular operations can happen, and deriving mathematical bounds that can act as speed limits. This work builds on several earlier works to add a new speed limit into the current pantheon, this time relying on systems with energy spectra which are bounded from above (i.e. have a well-defined maximum energy). The authors show that by considering the difference between this maximum energy and the mean energy of the system, a new speed limit can be derived, complementary to some of the previously known limits. While a bit technical and potentially tricky to follow, the result of this work is pretty nice and very interesting.
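Schematically (my own paraphrase of the landscape, not the paper’s precise statement), the two classic bounds on the minimal time to evolve between orthogonal states are the Mandelstam-Tamm and Margolus-Levitin bounds, and the new bound mirrors the latter with the roles of the spectrum edges swapped:

```latex
% Mandelstam-Tamm (energy uncertainty) and Margolus-Levitin (mean energy
% above the ground state):
\tau \geq \frac{\pi\hbar}{2\,\Delta E},
\qquad
\tau \geq \frac{\pi\hbar}{2\,\bigl(\langle E\rangle - E_{\min}\bigr)},
% and, for spectra bounded from above, a dual bound controlled by the
% distance from the mean energy to the top of the spectrum:
\tau \geq \frac{\pi\hbar}{2\,\bigl(E_{\max} - \langle E\rangle\bigr)}.
```

The intuition is that a state whose mean energy sits close to either edge of an allowed spectrum simply cannot evolve quickly, whichever edge it is.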

**July 6th**

*Level statistics of real eigenvalues in non-Hermitian systems, by Zhenyu Xiao, Kohei Kawabata, Xunlong Luo, Tomi Ohtsuki, and Ryuichi Shindou*: Non-Hermitian systems are host to complex eigenvalues, in contrast with the purely real eigenvalues found in typical Hermitian models, and are increasingly commonly studied as proxies for dissipative, open systems (as the imaginary part of the eigenvalue acts a bit like a decay term). This work studies the statistics of the energy spectra of non-Hermitian systems, and in particular looks at the properties of the *real* eigenvalues (i.e. not the complex ones). Level statistics are commonly used diagnostics of localisation and quantum chaos, and this work is a careful, systematic study of level statistics in non-Hermitian systems. It’s a bit dry, but categorising these models into symmetry classes is important work that may lead to a more unified understanding of non-Hermitian systems, by grouping systems together in terms of their universal behaviour rather than the specifics of any given model.

**July 7th**

*Many-body localized hidden Born machine, by Weishun Zhong, Xun Gao, Susanne F. Yelin, and Khadijeh Najafi*: So-called Born machines are something I have only recently become aware of, and don’t fully understand. They are ‘generative models’ used to learn an unknown probability distribution, then sample from it to generate new data. The crux of this paper is that the interplay of disorder and interactions – the key ingredients of many-body localisation – can be used to improve the performance of the ‘hidden Born machine’ proposed by the authors, with the ultimate idea here being that many-body localisation could be an asset in designing Born machines which are better able to learn the target dataset. I find this extremely interesting, but don’t understand enough learning theory to say much more about this - I definitely want to learn more about it though.
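The one part I do feel on solid ground with is the ‘Born’ in the name: samples are drawn from the Born-rule distribution p(x) = |ψ_x|² of a quantum state, and training means adjusting the state so that distribution matches the data. A minimal toy sketch (a random state standing in for the paper’s trained, MBL-inspired circuit):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "Born machine": a quantum state on a few qubits (random here; in
# practice the amplitudes are produced by a trainable circuit).
n_qubits = 3
amps = rng.normal(size=2**n_qubits) + 1j * rng.normal(size=2**n_qubits)
amps /= np.linalg.norm(amps)           # normalise the state

probs = np.abs(amps) ** 2              # Born-rule probabilities, sum to 1
samples = rng.choice(2**n_qubits, size=5, p=probs)
bitstrings = [format(s, f"0{n_qubits}b") for s in samples]
print(bitstrings)                      # generated data, e.g. ['101', '010', ...]
```

Training would then nudge `amps` (via the circuit parameters) to make `probs` match the target dataset, which is where the disorder-and-interactions story of the paper comes in.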

**July 8th**

*When do conservation laws improve the efficiency of the Density Matrix Renormalization Group?, by Thomas G. Kiely, and Erich J. Mueller*: This might be of interest to a pretty niche audience of people who use Density Matrix Renormalisation Group (DMRG) on a daily basis, but the question in the title is an interesting one. Normally in DMRG and other numerical methods, conservation laws and symmetries are an asset as they allow us to break the problem into different symmetry sectors and solve them all independently, i.e. turning one big problem into lots of little ones that can be more efficiently solved. The idea of this paper is to show that making use of such conservation laws is not always optimal, and that in the case of spontaneously broken symmetries, there may be more efficient ways of encoding the desired state in matrix product state form. The authors call for a greater awareness of spontaneous symmetry breaking and conservation laws in numerical studies, and suggest that keeping these factors in mind could lead to more efficient numerical simulations.
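The conventional wisdom the authors are probing is easy to illustrate with plain exact diagonalisation, a toy stand-in for DMRG (my own sketch, assuming a Heisenberg chain with conserved total S_z):

```python
import numpy as np

L = 8                    # spin-1/2 Heisenberg chain, open boundaries
dim = 2 ** L
H = np.zeros((dim, dim))

def bit(s, i):
    return (s >> i) & 1

# H = sum_i [ Sz_i Sz_{i+1} + (S+_i S-_{i+1} + S-_i S+_{i+1}) / 2 ]
for s in range(dim):
    for i in range(L - 1):
        if bit(s, i) == bit(s, i + 1):
            H[s, s] += 0.25                     # aligned neighbours
        else:
            H[s, s] -= 0.25                     # anti-aligned neighbours
            t = s ^ (1 << i) ^ (1 << (i + 1))   # flip-flop term
            H[t, s] += 0.5

# One big problem: diagonalise the full 256 x 256 matrix
full = np.sort(np.linalg.eigvalsh(H))

# Lots of little ones: total Sz (number of up spins) is conserved,
# so H is block-diagonal in sectors of fixed magnetisation
sector_eigs = []
for n_up in range(L + 1):
    idx = [s for s in range(dim) if bin(s).count("1") == n_up]
    block = H[np.ix_(idx, idx)]
    sector_eigs.append(np.linalg.eigvalsh(block))
blocked = np.sort(np.concatenate(sector_eigs))

print(np.allclose(full, blocked))   # same spectrum, much smaller blocks
```

Since dense diagonalisation scales as the cube of the matrix size, the biggest sector here (70 states) is vastly cheaper than the full 256. The paper’s point is that this bookkeeping, which is almost always a win, can actually hurt when the symmetry is spontaneously broken.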

*Tensor networks in machine learning, by Richik Sengupta, Soumik Adhikary, Ivan Oseledets, and Jacob Biamonte*: Another review to round off the week, this one about the role of tensor networks in machine learning. As someone who’s used tensor networks to simulate quantum systems, but knows next to nothing about machine learning, I find this an interesting ‘way in’ to understanding a bit more about machine learning from the point of view of tensor networks, a language that I’m already a bit familiar with. If I understand the basics, the idea is that optimising the parameters of a tensor network can be recast as a learning procedure, opening the door to using tensor network optimisation techniques to generate efficient ways of doing machine learning. As I said though, I’m no expert in this, so take the above with a grain of salt and give the paper a read for yourself if you’d like to understand more!
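The basic mechanics behind that connection are easy to sketch: any vector (a wavefunction or a flattened data tensor) can be rewritten as a matrix product state by a chain of SVDs, and truncating those SVDs is where both compression and learning-style approximation come in. Here’s a toy sketch of my own (the exact, untruncated case, not anything specific from the review):

```python
import numpy as np

rng = np.random.default_rng(1)

# Decompose a random 4-site "wavefunction" (or data vector) of shape
# (2, 2, 2, 2) into a matrix product state by sweeping left-to-right
# with SVDs.
d, n = 2, 4
psi = rng.normal(size=d ** n)
psi /= np.linalg.norm(psi)

tensors = []
rest = psi.reshape(1, -1)              # (left bond, everything else)
for site in range(n - 1):
    chi = rest.shape[0]
    m = rest.reshape(chi * d, -1)      # group current site with left bond
    U, S, Vh = np.linalg.svd(m, full_matrices=False)
    tensors.append(U.reshape(chi, d, -1))  # MPS tensor for this site
    rest = np.diag(S) @ Vh                 # carry the remainder rightwards
tensors.append(rest.reshape(rest.shape[0], d, 1))

# Contract the MPS back together; with no truncation it is exact
out = tensors[0]
for T in tensors[1:]:
    out = np.tensordot(out, T, axes=([-1], [0]))
out = out.reshape(-1)
print(np.allclose(out, psi))           # True
```

Keeping only the largest singular values at each step turns this into a controlled approximation, and optimising the tensors against a loss function instead of a wavefunction is, as far as I can tell, the basic move the review builds on.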