# Degree of recurrence of generic diffeomorphisms

Pierre-Antoine Guihéneuf, Sorbonne Université

*Discrete Analysis*, February 2019. https://doi.org/10.19086/da.7545.

### Editorial introduction

Degree of recurrence of generic diffeomorphisms, Discrete Analysis 2019:1, 43 pp.

The theory of discrete-time dynamical systems concerns iterations of maps f:X→X, where X is a space of some kind (e.g. a topological space or a measure space) and f is a map that satisfies certain assumptions (e.g. continuity or measure preservation). A central class of dynamical systems is the class where X is a manifold and f satisfies differentiability assumptions.

Since the behaviour of dynamical systems can be very complex, it is often useful to simulate dynamical systems numerically. But then the question immediately arises of how faithful a simulation can be. In general, the answer is far from obvious, since in a numerical simulation one is forced to make approximations, and yet, as the famous phrase “butterfly effect” suggests, small changes to a function can have large repercussions if one iterates the function many times.

A simple example that illustrates the difficulties is the *baker’s transformation*, which is a map from [0,1)² to itself that takes the point (x,y) to the point (2x,y/2) if 0≤x<1/2 and to the point (2x−1,(y+1)/2) if 1/2≤x<1. (One can imagine the square as a piece of bread: the baker’s transformation cuts it in half vertically, places the right half on top of the left half, and squashes the resulting rectangle back down to a square.)

The baker’s transformation is a measure-preserving bijection, but suppose one discretizes it by truncating all numbers to n places in their binary representations. Repeatedly doubling x mod 1 makes it zero after n steps, and once x is zero, halving y a further n times makes it zero as well. So after 2n steps, everything maps to zero.
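This collapse is easy to check numerically. Here is a minimal sketch (the function names are ours, not from the paper), using exact dyadic arithmetic so that truncation to n binary places is performed without floating-point error:

```python
from fractions import Fraction

def baker(p):
    """Exact baker's transformation on [0,1)^2."""
    x, y = p
    if x < Fraction(1, 2):
        return (2 * x, y / 2)
    return (2 * x - 1, (y + 1) / 2)

def truncate(p, n):
    """Keep only n binary places of each coordinate."""
    scale = 2 ** n
    x, y = p
    return (Fraction(int(x * scale), scale), Fraction(int(y * scale), scale))

def discretized_baker(p, n):
    """One step of the baker's map followed by truncation to n binary places."""
    return truncate(baker(p), n)

# Start from an arbitrary point of the dyadic grid with n = 8 binary places.
n = 8
p = (Fraction(173, 2 ** n), Fraction(41, 2 ** n))
for _ in range(2 * n):
    p = discretized_baker(p, n)
print(p == (0, 0))  # True: after 2n steps the orbit has collapsed to the origin
```

The first n steps kill the x-coordinate (each doubling discards the leading binary digit), and the remaining n steps kill the y-coordinate (each truncated halving discards its last binary digit).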

This paper is concerned with diffeomorphisms of the two-dimensional torus T², and examines what happens if one discretizes a diffeomorphism f:T²→T² by taking the n×n grid Xn of all points (x,y) whose coordinates are multiples of 1/n and replacing f by the function g:Xn→Xn that maps (x,y) to the point of Xn closest to f(x,y) (making some sensible choice to break ties).
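A direct way to implement this discretization is to represent grid points by integer pairs (i,j), standing for (i/n, j/n). The sketch below is illustrative only — the sample map f and all names are ours, not the author's — and breaks rounding ties by rounding half up, one sensible choice:

```python
import math

def nearest_grid_point(x, y, n):
    """Round a point of the torus to the nearest point of the n-by-n grid,
    with ties broken by rounding half up."""
    i = math.floor(x * n + 0.5) % n
    j = math.floor(y * n + 0.5) % n
    return (i, j)

def discretize(f, n):
    """Return g: Xn -> Xn, the discretization of f on the n-by-n grid."""
    def g(i, j):
        fx, fy = f(i / n, j / n)
        return nearest_grid_point(fx % 1.0, fy % 1.0, n)
    return g

# A toy smooth map of the torus (a small perturbation of the identity),
# standing in for a measure-preserving diffeomorphism.
def f(x, y):
    return ((x + 0.05 * math.sin(2 * math.pi * y)) % 1.0,
            (y + 0.05 * math.sin(2 * math.pi * x)) % 1.0)

g = discretize(f, 100)
print(g(25, 50))  # the grid point nearest to f(0.25, 0.5)
```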

There are many ways that one might choose to measure the success of a discretization. The approach here is to consider the limit dn, as m tends to infinity, of the densities of the images g^m(Xn) in Xn. Since Xn is finite, these images stabilize after finitely many steps, and the limiting set is the union of all the periodic orbits of g. One then examines the behaviour of the densities dn as n tends to infinity and regards it as a measure of how successfully one can discretize f: the larger dn tends to be, the more information has been retained and therefore the more successful the discretization.
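Since the images g^m(Xn) form a decreasing sequence of finite sets, the limiting density can be computed by iterating until the image stops shrinking. The following brute-force sketch (the function name is ours) does exactly that for a map g on the n×n integer grid:

```python
def eventual_image_density(g, n):
    """Iterate g on the whole n-by-n grid until the image stabilizes.
    The limit set is the union of the periodic orbits of g; return its
    density dn = |limit set| / n^2."""
    current = {(i, j) for i in range(n) for j in range(n)}
    while True:
        # The images are nested, so this sequence of sets can only shrink.
        nxt = {g(i, j) for (i, j) in current}
        if nxt == current:
            return len(current) / n ** 2
        current = nxt

# Two extreme toy examples on a 10-by-10 grid:
print(eventual_image_density(lambda i, j: (i, j), 10))  # identity: 1.0
print(eventual_image_density(lambda i, j: (0, 0), 10))  # collapse: 0.01
```

The identity retains everything (density 1), while a map sending every grid point to the origin retains a single point (density 1/n²), mirroring the two extremes discussed above.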

The expectation is that the discretizations will *not* be very successful in general. However, it is easy to construct examples for which the limiting density is 1, so in order to understand the general situation it is helpful to consider functions f that are generic in some sense, and the most convenient sense turns out to be the Baire category one: that is, one attempts to prove results that hold for all functions f outside some meagre set.

The main result of the paper is to show that adding differentiability assumptions to f makes a big difference. If f is a generic measure-preserving homeomorphism, then the sequence of densities dn is dense in the interval [0,1]. However, this paper shows that if r>1 and f is a generic measure-preserving Cr diffeomorphism, then dn→0, which we can interpret as saying that typically all the information is lost when we discretize.

As the author admits, there are many ways one might consider measuring the quality of a discretization, not to mention more sophisticated discretizations, some of which might be more appropriate from the point of view of somebody who actually wanted to simulate a dynamical system. However, even a simple measure such as this one gives some insight, and the proofs of the results are interesting in their own right. Very roughly, the idea is to look at the “mesoscopic scale” – that is, a larger scale than that of the grid points but a smaller scale than that of the map in its entirety – at which f is approximately affine. The question is reduced to one about generic sequences of linear maps, and the proof involves a notion known as a “model set”, which belongs to a theoretical framework usually used to study quasicrystals, as well as a nontrivial application of Minkowski’s first theorem from the geometry of numbers.