I'm an associate professor ("maître de conférences") at MAP5 in Paris. I'm mostly interested in probability and statistical physics, and I work a lot on time scales of random processes.

I did my PhD at the LPSM with Cristina Toninelli on interacting particle systems, and specifically on kinetically constrained models. This is a family of models studied in physics in order to better understand jamming of glassy and granular materials. You can see my thesis here.
I continued this line of work during a postdoc in the probability group at Roma Tre University.

I studied for a B.Sc. in mathematics and physics at the Technion in Haifa (which is also my home town). Besides classes, I also got to work on two research projects, one in probability with Itai Benjamini at the Weizmann Institute, and the other in experimental physics at Bielefeld University with Armin Gölzhäuser. After my bachelor's, I moved to Paris and studied for a master's degree in theoretical physics at the ENS. This was a two-year program, with one research internship each year. In the first year I worked at the LPMA (which later became the LPSM), with Giambattista Giacomin and Cristina Toninelli. Both projects were in mathematical statistical physics, approached from its mathematical side. For my second year's internship I worked with Kay Wiese at the LPTENS, still on statistical physics but from its physics side.

Kinetically constrained lattice gases are models constructed in order to study glassy dynamics. They could be seen as a conservative version of kinetically constrained spin models.
In a kinetically constrained lattice gas, particles live on the sites of $\mathbb{Z}^d$. Two particles aren't allowed to be at the same site, so all sites are either empty or occupied.
In the dynamics, particles jump between sites. The model is called kinetically constrained because not all jumps are allowed: a particle may jump only when there is enough free space around it.

Adding this constraint slows the system down, especially when the density is large. If a particle wants to move, we first need to free up some space by emptying the sites around it. But if these sites are occupied, we need to free up space around each of them and move those particles away. To do that we need to empty even more sites, and move even more particles, and so on. The procedure ends only when we reach a region that already has enough empty sites, so nothing more needs to be moved.
The number of steps it takes until we manage to move the particle depends on the exact definition of the constraint.
In some models, this procedure might never end. In others, we have to move many particles in a very specific, coordinated way.

Noncooperative models are models in which some small region containing many vacancies can move around freely. Such a region is called a mobile cluster. So if we want to move a particle, what we need to do is look for a mobile cluster and bring it close to our particle. Done the right way, the mobile cluster provides enough vacancies around the particle for it to be allowed to jump.

The general question I asked about these models is how time scales grow in large systems. Because of the constraint, we expect time scales to be large when the density is high. I proved several results saying that these time scales are diffusive. This means the basic relation $\text{time} \propto \text{space}^2$ isn't violated by the constraint; the slowing down it causes only affects the coefficient. The results hold for all noncooperative models, so the next step would be to understand cooperative models better.

Topologically induced metastability in a periodic XY chain

The purpose of this project was to approach a model with some non-trivial topology from the dynamics point of view. The most famous model with this type of behavior is the XY model in two dimensions, which has a critical temperature where isolated vortices start to appear. Analyzing this model is very difficult already in equilibrium, and adding dynamics on top of that complicates things even more. We decided to study a one-dimensional chain, which still has some interesting topological behavior but is much easier to understand.

So we take a chain of rotors, $X_1,\dots,X_N$, each living on the circle $\mathbb{R}/2\pi\mathbb{Z}$. They interact via the XY Hamiltonian, $H=-J\sum_{i=1}^{N} \cos(X_{i}-X_{i-1})$, and in order to see interesting topological effects we take periodic boundary conditions ($X_0=X_N$). When the temperature is fixed and the size of the system goes to infinity, the model becomes trivial: fixing one rotor to whatever value we want costs only a finite energy, and once we do that, the spins to its right no longer depend on the spins to its left. Correlations therefore decay exponentially fast, and we can't expect to see any global behavior. If we do want to see topological effects, we should let the interaction strength $\beta J$ grow to infinity with $N$.

Imagine that, for some fixed large $N$, the interaction $\beta J$ is so big that it forces all angle differences $X_i-X_{i-1}$ to be very small. The rotors then look like a continuous field – give each point $t\in \mathbb{R}/\mathbb{Z}$ the value $X_{\lfloor Nt \rfloor}$. When $N$ is large, we can think of this map as (approximately) a continuous function from $\mathbb{R}/\mathbb{Z}$ to $\mathbb{R}/2\pi\mathbb{Z}$. If this is indeed the case, that function has a winding number. The winding number is an integer, and it doesn't change under continuous deformations of the field, so if the dynamics we choose is continuous, the winding number stays fixed. An interesting aspect of this behavior is that it can keep the system out of equilibrium. The energy is minimal when the winding number is $0$, so when $\beta J$ is large the equilibrium measure should concentrate there. Hence, if we start with a configuration whose winding number is nonzero, the system can never reach equilibrium, which means that this state is a (meta)stable non-equilibrium state. This remains true until the continuous field approximation breaks down; only then can the winding number change.

This continuous field approximation is based on the assumption that all the angles between neighboring rotors are small. It's actually not difficult to see that the winding number of the discrete chain can only change when two neighboring rotors point in opposite directions. This costs a free energy barrier that has two parts – the energy difference $2J$, and an entropy term $\log N$ coming from the fact that there are $N$ possible pairs of rotors. In the paper we analyzed the Itô process given by $\text{d}X(t)=-\nabla H \,\text{d}t + (2 \beta)^{-1/2} \text{d}B(t)$, which is reversible with respect to the XY measure. We proved that for this process the time it takes to change the winding number behaves (roughly) like $\exp(2\beta J - \log N)$.
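To make this concrete, here is a minimal sketch (my own illustration, with illustrative parameter values) of an Euler–Maruyama discretization of this process together with the winding-number computation; wrapping each neighboring angle difference to $[-\pi,\pi)$ plays the role of the continuous-field assumption:

```python
import numpy as np

def winding_number(x):
    """Winding number of the closed chain of angles x[0..N-1] (radians):
    wrap each neighboring difference to [-pi, pi) and sum around the
    cycle; the total is 2*pi times an integer."""
    d = np.diff(np.append(x, x[0]))         # differences around the cycle
    d = (d + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return int(round(d.sum() / (2 * np.pi)))

def em_step(x, J, beta, dt, rng):
    """One Euler-Maruyama step of dX = -grad(H) dt + (2*beta)^(-1/2) dB
    for H = -J * sum_i cos(X_i - X_{i-1}) with periodic boundary."""
    left, right = np.roll(x, 1), np.roll(x, -1)
    drift = -J * (np.sin(x - left) + np.sin(x - right))
    noise = rng.normal(0.0, np.sqrt(dt / (2 * beta)), size=x.shape)
    return x + drift * dt + noise
```

Starting from the uniformly wound configuration `x = 2 * np.pi * np.arange(N) / N` (winding number $1$) and iterating `em_step` with large $\beta J$, the winding number stays put for a very long time, in line with the $\exp(2\beta J - \log N)$ scale.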

In the picture you can see the winding number as a function of time for a certain realization of the process. The animation shows how the winding number changes (for a different realization). Look for the two rotors that point in opposite directions! (code)

The Kob-Andersen model is an interacting particle system evolving according to Kawasaki dynamics, i.e., particles live on the vertices of $\mathbb{Z}^d$ and can jump to neighboring sites. In this particular model, a particle can jump only to an empty site, and only if its number of empty neighbors passes some fixed threshold both before and after the jump. This constraint slows down the system, and its effect grows stronger as the density increases. Simulations run by physicists showed such a drastic slowdown at high density that they conjectured the existence of a critical density above which the model is no longer diffusive (so when looking at distances of order $N$, the relevant time scale is strictly longer than order $N^2$).
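As an illustration of the jump rule, here is a minimal sketch under my own conventions (a periodic grid, with `m` playing the role of the fixed threshold; all names are mine):

```python
import numpy as np

def ka_jump_allowed(occ, x, y, m):
    """Kob-Andersen constraint on a periodic grid `occ` (1 = particle,
    0 = empty): the particle at site x may jump to the neighboring empty
    site y only if it has at least m empty neighbors both before and
    after the jump."""
    def empty_neighbors(site, grid):
        count = 0
        for axis in range(grid.ndim):
            for sign in (1, -1):
                nb = list(site)
                nb[axis] = (nb[axis] + sign) % grid.shape[axis]
                count += 1 - grid[tuple(nb)]
        return count

    if occ[x] == 0 or occ[y] == 1:
        return False                      # need a particle at x, a hole at y
    after = occ.copy()
    after[x], after[y] = 0, 1             # configuration after the jump
    return empty_neighbors(x, occ) >= m and empty_neighbors(y, after) >= m
```

At low density the check almost always passes, while in a nearly full grid a particle rarely has `m` empty neighbors, which is exactly the slowdown described above.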
Whether or not a model is diffusive depends on the exact observable we're interested in. Each of the three papers in this series treats a different meaning of being diffusive, but the result in all of them is similar — the model is diffusive, and when the density is high the diffusion coefficient decays extremely fast (and in the same manner for all three observables).
In the first paper, we analyzed the relaxation time, which describes how long correlations take to decay in finite volume $N^d$. We showed that it grows like $C N^2$, meaning that it's diffusive, and $1/C$ is the diffusion coefficient.
In the second paper we studied the path of a single tagged particle when the system is in its stationary state. If we look from very far away, so that one lattice spacing has length $1/N$, and fast forward the dynamics by a factor $N^2$, the path of the tagged particle looks like a Brownian motion. This is a type of diffusive scaling, and the diffusion coefficient is the one of the Brownian motion.
The third paper treats the hydrodynamic limit of the model — take the model in finite volume $N^d$, with some initial density profile that's not constant. In the classical sense, a hydrodynamic limit means that if we speed up time by $N^2$, the density profile solves (in the limit) a hydrodynamic equation. The model is then diffusive if this hydrodynamic equation is non-degenerate, that is, if the diffusion coefficient is non-zero. This can't happen in the Kob-Andersen model (at least for a general initial density), because there is always the possibility of a blocked structure, in which no particle has enough empty neighbors and the dynamics is stuck. This effect, though, is not very physical, since it is very noise-intolerant — I prove in the paper that with a very small perturbation (as small as we want) the model converges to a hydrodynamic limit. The dependence of the diffusion coefficient on the magnitude of the perturbation is negligible, so taking it to $0$ (after taking $N$ to infinity) yields a non-degenerate hydrodynamic equation.

Kinetically constrained models in random environments

Kinetically constrained models are a family of interacting particle systems that try to explain certain aspects of glasses and granular materials. In order to do that, we think of particles living on the vertices (sites) of some graph, usually with at most one particle per site. Particles are allowed to appear and disappear, but only if there are enough empty sites nearby (so the neighborhood isn't too dense). The exact meaning of "enough empty sites" depends on the model; for example, in the Fredrickson-Andersen $j$-spin facilitated model (FAjf) it means that the site has at least $j$ empty neighbors.

In these two works I wanted to understand how typical time scales diverge when the density of particles is very high. This question had been studied before for many kinetically constrained models in homogeneous systems, and different models show different behaviors, some of which can also be seen experimentally.

But physical systems are not always homogeneous (e.g., a granular material with two grain types). In these cases, it can be more appropriate to consider dynamics in a random environment. The first paper deals with a couple of models that have mixed constraints (e.g., FAjf with different $j$ at different sites). In the second paper the random environment we took was the polluted lattice, which had already been analyzed for a closely related model called bootstrap percolation.

Many of the tools used to understand kinetically constrained models in homogeneous environments stop working when the system is not homogeneous (e.g., the relaxation time/spectral gap). The reason is that in random systems, remote regions that are not at all typical influence the quantities we study, even though they shouldn't affect the actual dynamics. We needed to find a way to analyze the time scales of the model that ignores these atypical regions. We managed to do that by looking at the hitting times of events that depend only on the state near the origin (or any other fixed vertex).

For example, for FA2f on the polluted lattice, the relaxation time is infinite, but the time to empty the origin is typically finite (and we can even say how it behaves at large densities). For the FA1f and FA2f mixture, the emptying time behaves like a power of $1-\text{density}$ (while the relaxation time is exponential). In this case, I also prove that this power is random, i.e., it changes from one realization of the environment to another.

Field theoretical representation of the Loop-erased random walk

This project was inspired by an observation in the physics literature: a relation between the loop-erased random walk and a $\phi^4$ theory with $n=-2$ components.
In the first paper we found two possible spin systems with observables that can be related to loop-erased random walks. Both of them have two bosons and four fermions, which gives the total $n=-2$.
The hope is that this could be the starting point of a rigorous renormalization procedure that would give information on the scaling limit of the loop-erased random walk, such as the calculations of its Hausdorff dimension done with non-rigorous field theory techniques.

The second paper shows how to reduce these spin systems to the standard $\phi^4$ theory, and also presents the argument of the first paper in a more approachable way for physicists.

The picture to the right is of a loop-erased random walk on the honeycomb lattice with the erased loops in pink. The video shows how it's constructed (backtracking steps are suppressed to make the animation look nicer). Simulation by Kay Wiese.

Spectral gap of the Fredrickson-Andersen one spin facilitated model

In this note I proved two small results on the Fredrickson-Andersen one spin facilitated model in the stationary state. The first answers an open question from 2008 in this paper, about the relaxation time in dimensions 3 and above. The second is about the relaxation time in finite graphs.
In the arXiv version there's one more theorem, on the persistence function of the model in $\mathbb{Z}^d$.

Metastable behavior of bootstrap percolation on Galton-Watson trees

In this work, I tried to understand how bootstrap percolation on Galton-Watson trees behaves near its phase transition. I analyzed the $r$-children bootstrap percolation – each vertex of the tree is either infected or healthy, and at each step a healthy vertex becomes infected if it has at least $r$ infected children. In the animation, healthy vertices (white) become infected (black) if they have at least two infected children. This is a deterministic dynamics in discrete time; the randomness comes from the tree (which is random) and from the initial configuration. In our case, sites are initially infected independently with probability $q$.
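The dynamics is easy to state in code. Here is a minimal sketch, with the tree given as a hypothetical map from each vertex to its list of children (all names are mine):

```python
def bootstrap_step(children, infected, r):
    """One step of r-children bootstrap percolation: every healthy vertex
    with at least r infected children becomes infected (infection is
    never removed)."""
    newly = {v for v in children
             if v not in infected
             and sum(c in infected for c in children[v]) >= r}
    return infected | newly

def bootstrap(children, infected, r):
    """Iterate until the infected set stabilizes; return the final set."""
    while True:
        nxt = bootstrap_step(children, infected, r)
        if nxt == infected:
            return infected
        infected = nxt
```

For example, on the binary tree of depth two with $r=2$, infecting all four leaves eventually infects the whole tree, while infecting only one leaf in each subtree spreads nothing.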
Depending on the offspring distribution of the Galton-Watson tree and on the threshold $r$, a phase transition can occur – up to a certain value of $q$, some of the sites belong to a block of “immune” sites that can never be infected; but if enough sites are infected in the beginning, all sites will eventually be infected with probability $1$. It's even possible to tell whether this probability is continuous or discontinuous at the critical point, and in fact different offspring distributions give different behavior.
I was interested in the time scaling near the critical point. In the continuous case, the question is how fast the probability to stay healthy forever decays as $q$ approaches its critical value. In the discontinuous case, the picture is different – at the critical value, the limiting probability to stay healthy is nonzero, but as soon as we increase $q$ it becomes $0$. Looking more closely, we see that when $q$ is just above criticality, the probability to remain healthy after a certain number of time steps has a long plateau, and only after a very long time does it decay to $0$. This plateau becomes longer and longer as $q$ approaches criticality, and at the critical point its length diverges. What I was trying to understand is how the length of this plateau scales with $q$ for different offspring distributions.
It turns out that there are many possible exponents for the scaling of the plateau with the distance from criticality, unlike the case of a regular tree, where this exponent is always $1/2$. Another behavior that doesn't exist on regular trees is the appearance of additional plateaus, beyond the single one at the limiting probability at criticality.

Hausdorff dimension of the record set of a fractional Brownian motion

The fractional Brownian motion with Hurst index $H$ is a continuous Gaussian process. It has stationary increments, and it is self-similar with exponent $H$ – stretching time by some scalar $\alpha$ and space by $\alpha^H$ doesn't change its law. For $H=1/2$ we get the standard Brownian motion, but for other values of $H$ the process is not Markovian.
We were interested in its records, i.e., the times at which the process reaches its running maximum. For the Brownian motion, the records have the same statistics as the zeros, so they have the same Hausdorff dimension of $1/2$. One way to understand that connection is to think of the simple random walk (with probability $1/2$ to move up and $1/2$ to move down), but started at $-1/2$ rather than $0$. The walk never hits $0$, but we can think of the zero set as the times at which the walk changes sign. These times form a point process with independent increments, and the law of each increment is the law of the time it takes a simple random walk to go from $0$ to $1$ (or from $0$ to $-1$, depending on whether it's an even or odd point, but that's the same law). This is exactly the same process as the records – whenever we break a record, the next record comes when we hit the current record $+1$.
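For a sample path of the simple random walk, the record times are straightforward to extract; a minimal sketch (function and variable names are mine):

```python
import numpy as np

def record_times(path):
    """Indices t >= 1 at which the walk strictly exceeds its running
    maximum, i.e., the times at which a record is broken."""
    path = np.asarray(path)
    running_max = np.maximum.accumulate(path)
    return [t for t in range(1, len(path)) if path[t] > running_max[t - 1]]

# a +/-1 walk: each record is broken exactly when the walk first hits
# its previous maximum + 1, as in the argument above
rng = np.random.default_rng(0)
steps = np.where(rng.random(10_000) < 0.5, 1, -1)
walk = np.concatenate([[0], np.cumsum(steps)])
records = record_times(walk)
```

On a $\pm 1$ walk every record value exceeds the previous maximum by exactly $1$, which is the hitting-time description used in the text.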
This reasoning relies on the Markov property, and for the fractional Brownian motion it fails. We showed in this work that, unlike the zero set (which has dimension $1-H$), the record set has dimension $H$, so the two coincide only for the Brownian motion.

Small noise and long time phase diffusion in stochastic limit cycle oscillators

In this work we analyzed the phase shift of an oscillator caused by random noise. Consider a dynamical system with a stable periodic path (limit cycle), and perturb it with white noise of small amplitude $\epsilon$. We then compare the noisy path to the original unperturbed one. In the beginning, since the noise is small, it is barely felt, and both systems move more or less together. As time goes by, the noise starts playing a role. Fluctuations transverse to the cycle are suppressed very strongly, since the limit cycle is stable; in the direction parallel to the limit cycle, however, the system can fluctuate freely. This is why we expect the perturbed and unperturbed systems to separate at times of order $\epsilon^{-2}$, the timescale of free diffusion. We showed that they indeed separate on this timescale, and found an exact expression describing the (stochastic) dynamics of the difference between the two phases.
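A toy version of this setup (my own construction, using the Hopf normal form as the system with a stable limit cycle, which is not necessarily the system studied in the paper):

```python
import numpy as np

def noisy_oscillator(eps, T, dt, seed=0):
    """Euler-Maruyama for a planar system with a stable limit cycle:
    the Hopf normal form dz = ((1 + 1j) - |z|^2) z dt, whose limit cycle
    is the unit circle traversed at angular velocity 1, perturbed by
    complex white noise of amplitude eps."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    z = np.empty(n + 1, dtype=complex)
    z[0] = 1.0 + 0.0j                                  # start on the cycle
    for k in range(n):
        drift = ((1 + 1j) - abs(z[k]) ** 2) * z[k]
        dW = rng.standard_normal() + 1j * rng.standard_normal()
        z[k + 1] = z[k] + drift * dt + eps * np.sqrt(dt) * dW
    return z

# the unwrapped phase minus t is the slowly diffusing phase shift
z = noisy_oscillator(eps=0.01, T=2 * np.pi, dt=1e-3)
phase_shift = np.unwrap(np.angle(z))[-1] - 2 * np.pi
```

Over one period the radius stays pinned near $1$ (transverse fluctuations are suppressed), while the phase picks up a small random shift that only becomes order one on times of order $\epsilon^{-2}$.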
The picture shows an example of a dynamical system with a stable limit cycle (black) and two realizations of the noisy dynamics (blue and red). The simulation covers one period, and we see that some random phase difference is created. The animation shows both the unperturbed dynamics (blue) and the noisy one (red), and we see how the phase difference between the two evolves.

On maximizing the speed of a random walk in fixed environments

Consider a random walk on the discrete segment $[0,N]$ reflected at $0$. The random walk starts at $0$, and ends when it reaches $N$. The points of $[0,N]$ are of two types – the background and the boosts. In the background, the probability to move to the right is $q$ and $1-q$ to the left. At boosts, the probability to move to the right is $p$ and $1-p$ to the left. If we are allowed $k$ boosts, where should we put them in order to minimize the expected hitting time at $N$?
This question was solved here for $\frac{1}{2} = q < p$, where the optimal placement is to space the boosts equally. We extended this result to $\frac{1}{2} < q < p$, showing that the same spacing works.
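The expected hitting time for a given boost placement can be computed exactly from the first-step equations; here is a sketch (all names are mine, and I take "reflected at $0$" to mean a sure step from $0$ to $1$):

```python
from itertools import combinations

def expected_hitting_time(N, q, p, boosts):
    """Expected time for the reflected walk on [0, N] to hit N, when the
    right-step probability is p at the boost sites and q elsewhere.
    Writing h_i for the expected hitting time from i and d_i = h_i - h_{i+1},
    the first-step equations give d_0 = 1 and d_i = (1 + (1 - p_i) d_{i-1}) / p_i,
    and h_0 is the sum of the d_i."""
    boosts = set(boosts)
    d = 1.0                         # d_0 = 1 by reflection at 0
    total = d
    for i in range(1, N):
        pi = p if i in boosts else q
        d = (1 + (1 - pi) * d) / pi
        total += d
    return total

def best_placement(N, q, p, k):
    """Brute-force search over all placements of k boosts in 1..N-1."""
    return min(combinations(range(1, N), k),
               key=lambda b: expected_hitting_time(N, q, p, b))
```

For small $N$ the brute-force search can be compared directly with the equally spaced placement predicted by the result above.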