I'm a postdoc in Roma Tre University's probability group, interested mostly in mathematical statistical physics.

I did my PhD at the LPSM with Cristina Toninelli on interacting particle systems, and specifically on kinetically constrained models. This is a family of models studied in physics in order to better understand jamming of glassy and granular materials. You can see my thesis here.

I studied for a B.Sc. in mathematics and physics at the Technion in Haifa (which is also my hometown). Besides classes, I also got to work on two research projects: one in probability with Itai Benjamini at the Weizmann Institute, and the other in experimental physics at Bielefeld University with Armin Gölzhäuser.

After my bachelor's, I moved to Paris for a master's degree in theoretical physics at the ENS. This was a two-year program, with one research internship each year. In the first year I worked at the LPMA (which later became the LPSM) with Giambattista Giacomin and Cristina Toninelli, on two projects in mathematical statistical physics, approached from the mathematical side. For my second year's internship I worked with Kay Wiese at the LPTENS, still on statistical physics but from its physics side.

My CV is here.

The purpose of this project was to approach a model with non-trivial topology from the dynamics point of view. The most famous model with this type of behavior is the XY model in two dimensions. In that model there is a critical temperature above which isolated vortices can appear. Analyzing this model is already very difficult in equilibrium, and adding dynamics on top of that complicates things even further. We decided to study a one-dimensional chain instead, which still has some interesting topological behavior but is much easier to understand.

So we take a chain of rotors, $X_1,\dots,X_N$, each living on the circle $\mathbb{R}/2\pi\mathbb{Z}$. They interact via the XY Hamiltonian, $H=-J\sum_{i=1}^{N} \cos(X_{i}-X_{i-1})$, and in order to see interesting topological effects we take periodic boundary conditions ($X_0=X_N$). When the temperature is fixed and the size of the system goes to infinity, the model becomes trivial: fixing one rotor to whatever value we want costs only a finite amount of energy, and once we do so the spins to its right no longer depend on the spins to its left. Correlations therefore decay exponentially fast, and we cannot expect to see any global behavior. If we do want to see topological effects, we should take the interaction strength $\beta J$ to infinity together with $N$.

Imagine that, for some fixed large $N$, the interaction $\beta J$ is so large that it forces all the angles $X_i-X_{i-1}$ to be very small. The rotors then look like a continuous field: give each point $t\in \mathbb{R}/\mathbb{Z}$ the value $X_{\lfloor Nt \rfloor}$. When $N$ is large, we can think of this map as (approximately) a continuous function from $\mathbb{R}/\mathbb{Z}$ to $\mathbb{R}/2\pi\mathbb{Z}$. If this is indeed the case, that function has a winding number. The winding number is an integer, and it does not change under continuous deformations of the field, so if the dynamics we choose is continuous, the winding number stays fixed. An interesting aspect of this behavior is that it can keep the system out of equilibrium. The energy is minimal when the winding number is $0$, so when $\beta J$ is large the equilibrium measure should concentrate there. Hence, if we start from a configuration whose winding number is non-zero, the system can never reach equilibrium, which means that this state is a (meta)stable non-equilibrium state. This remains true until the continuous-field approximation breaks down; only then can the winding number change.
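The discrete winding number is easy to compute in practice. Here is a small sketch (illustrative code, not from the paper): fold each angle increment into $[-\pi,\pi)$ and sum around the chain.

```python
import numpy as np

def winding_number(angles):
    """Winding number of a closed chain of rotors.

    `angles` are the values X_1, ..., X_N in [0, 2*pi); with periodic
    boundary conditions the chain closes on itself, so summing the
    increments X_i - X_{i-1} (each folded into [-pi, pi)) and dividing
    by 2*pi gives an integer.
    """
    diffs = np.diff(angles, append=angles[0])        # includes the closing step
    wrapped = (diffs + np.pi) % (2 * np.pi) - np.pi  # fold each step into [-pi, pi)
    return int(round(wrapped.sum() / (2 * np.pi)))

# A configuration that winds once around the circle:
N = 100
once_around = 2 * np.pi * np.arange(N) / N
print(winding_number(once_around))  # -> 1
```

Since every folded increment is small when neighboring rotors are nearly aligned, this quantity can only jump when two neighbors are close to opposite, which is exactly the mechanism discussed below.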

This continuous-field approximation is based on the assumption that all the angles between neighboring rotors are small. It is actually not difficult to see that the winding number of the discrete chain can only change when two neighboring rotors point in opposite directions. This costs a free energy barrier with two parts: the energy difference $2J$, and an entropy term $\log N$ coming from the fact that there are $N$ possible pairs of rotors. In the paper we analyzed the Itô process given by $\text{d}X(t)=-\nabla H \,\text{d}t + \sqrt{2\beta^{-1}} \,\text{d}B(t)$, which is reversible with respect to the XY measure. We proved that for this process the time it takes for the winding number to change behaves (roughly) like $\exp(2\beta J - \log N)$.
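A minimal Euler-Maruyama discretization of this Itô process can be sketched as follows (the coupling, temperature, and step size below are illustrative toy values, not parameters from the paper):

```python
import numpy as np

def grad_H(X, J):
    """Gradient of H = -J * sum_i cos(X_i - X_{i-1}) with periodic boundaries."""
    return J * (np.sin(X - np.roll(X, 1)) + np.sin(X - np.roll(X, -1)))

def euler_maruyama(X0, J, beta, dt, n_steps, rng):
    """Discretize dX = -grad(H) dt + sqrt(2/beta) dB on the rotor chain."""
    X = X0.copy()
    noise = np.sqrt(2 * dt / beta)
    for _ in range(n_steps):
        X = X - grad_H(X, J) * dt + noise * rng.standard_normal(X.shape)
    return X % (2 * np.pi)

# Toy run: strong coupling, short time, starting from a winding-1 configuration.
rng = np.random.default_rng(0)
N = 50
X0 = 2 * np.pi * np.arange(N) / N
X = euler_maruyama(X0, J=10.0, beta=5.0, dt=1e-3, n_steps=1000, rng=rng)
```

At strong coupling the drift keeps neighboring angles nearly aligned, so over short runs like this one the winding number is expected to stay put; changing it requires the exponentially rare event described above.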

In the picture you can see the winding number as a function of time for a certain realization of the process. The animation shows how the winding number changes (for a different realization). Look for the two rotors that point in opposite directions! (code)

With Anatole Ertul.

The Kob-Andersen model is an interacting particle system evolving according to the Kawasaki dynamics: particles live on the vertices of $\mathbb{Z}^d$ and can jump to neighboring sites. In this particular model a particle can jump only to an empty site, and only if the number of empty neighbors it has passes some fixed threshold both before and after the jump. This constraint slows the system down, and its effect is stronger the larger the density. Simulations run by physicists showed such a drastic slowdown at high densities that they conjectured the existence of a critical density above which the model is no longer diffusive (so that when looking at distances of order $N$, the relevant time scale is strictly longer than order $N^2$).
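The jump rule can be made concrete with a short sketch (an illustrative helper on a periodic grid, not code from the papers): a particle may move to an empty neighboring site only if its number of empty neighbors passes the threshold both before and after the move.

```python
import numpy as np

def can_jump(config, site, target, threshold):
    """Kob-Andersen constraint check on a periodic 0/1 grid (1 = particle).

    The particle at `site` may jump to the neighboring `target` only if
    `target` is empty and the particle has at least `threshold` empty
    neighbors both before and after the jump.
    """
    def empty_neighbors(c, pos):
        count = 0
        for axis in range(c.ndim):
            for step in (-1, 1):
                nb = list(pos)
                nb[axis] = (nb[axis] + step) % c.shape[axis]
                count += 1 - c[tuple(nb)]
        return count

    if config[site] != 1 or config[target] != 0:
        return False
    before = empty_neighbors(config, site)
    after_config = config.copy()
    after_config[site], after_config[target] = 0, 1
    after = empty_neighbors(after_config, target)
    return before >= threshold and after >= threshold
```

For example, on a nearly empty grid a lone particle can always move, while on a nearly full grid the constraint blocks it, which is the blocking effect the conjecture is about.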

Whether or not a model is diffusive depends on the exact observable we're interested in. Each of the three papers in this series treats a different meaning of being diffusive, but the result in all of them is similar — the model is diffusive, and when the density is high the diffusion coefficient decays extremely fast (and in the same manner for all three observables).

In the first paper, we analyzed the relaxation time, which describes how long correlations take to decay in a finite volume of $N^d$ sites. We showed that it grows like $C N^2$, meaning that the model is diffusive, with $1/C$ as the diffusion coefficient.

In the second paper we studied the path of a single tagged particle when the system is in its stationary state. If we look from very far away, so that one lattice spacing has length $1/N$, and fast forward the dynamics by a factor of $N^2$, the path of the tagged particle looks like a Brownian motion. This is a type of diffusive scaling, and the diffusion coefficient is that of the limiting Brownian motion.
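The diffusive scaling itself can be illustrated with a free random walk standing in for the tagged particle (an idealized sketch: the actual Kob-Andersen particle has a much smaller diffusion coefficient, but the rescaling is the same).

```python
import numpy as np

rng = np.random.default_rng(3)

# Shrink space by 1/N and speed time up by N^2: at rescaled time t, the
# position X_{N^2 t} / N of a free +-1 walk has variance 2*D*t with D = 1/2,
# matching a Brownian motion with diffusion coefficient D.
N, t, n_samples = 20, 1.0, 2000
steps = rng.choice([-1, 1], size=(n_samples, int(N**2 * t)))
endpoints = steps.sum(axis=1) / N  # samples of X_{N^2 t} / N
print(endpoints.var())             # close to t = 1 for the free walk
```

The constraint of the Kob-Andersen model would shrink this variance (i.e., the diffusion coefficient) drastically at high density, but would not change the $N^2$ scaling.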

The third paper discusses the hydrodynamic limit of the model: take the model in a finite volume $N^d$, with some non-constant initial density profile. In the classical sense, a hydrodynamic limit means that if we speed up time by $N^2$, the density profile solves (in the limit) a hydrodynamic equation. The model is then diffusive if this hydrodynamic equation is non-degenerate, that is, if the diffusion coefficient is non-zero. This cannot happen in the Kob-Andersen model (at least for a general initial density), because there is always the possibility of a blocked structure, in which no particle has enough empty neighbors and the dynamics is stuck. This effect, however, is not very physical, since it is extremely intolerant to noise: I prove in the paper that with a very small perturbation (as small as we want) the model converges to a hydrodynamic limit. The dependence of the diffusion coefficient on the magnitude of the perturbation is negligible, so if we take it to $0$ (*after* taking $N$ to infinity) we obtain a non-degenerate hydrodynamic equation.

Kinetically constrained models (KCMs) are interacting particle systems whose dynamics is slowed down by a kinetic constraint. They live on a graph, where each vertex (site) can be either empty or occupied. The state of a site can only change when the constraint is satisfied. When this is the case, the site updates with rate $1$: independently of everything else, it forgets its state and receives a new random state, which is empty with probability $q$ and occupied with probability $1-q$.

One example is the Fredrickson-Andersen $j$-spin facilitated model (FAjf) on the two-dimensional lattice, where a site can only be updated if it has at least $j$ empty neighbors. Another example is the North-East model, where we require both the site above and the site to the right to be empty for the constraint to be satisfied. In general, the constraints we choose are easier to satisfy when there are more empty sites.
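These two constraints are simple enough to write down directly. Here is a sketch on a periodic two-dimensional grid (the coordinate conventions are my own choice for illustration):

```python
import numpy as np

def fa_constraint(config, site, j):
    """FA-jf constraint: `site` can update iff it has at least j empty
    neighbors (`config` is a 0/1 array on a periodic grid, 0 = empty)."""
    x, y = site
    n, m = config.shape
    neighbors = [((x + 1) % n, y), ((x - 1) % n, y),
                 (x, (y + 1) % m), (x, (y - 1) % m)]
    return sum(1 - config[p] for p in neighbors) >= j

def north_east_constraint(config, site):
    """North-East constraint: with (x, y) coordinates, x horizontal and y
    vertical, the site to the right and the site above must both be empty."""
    x, y = site
    n, m = config.shape
    return config[(x + 1) % n, y] == 0 and config[x, (y + 1) % m] == 0
```

Note how both constraints depend only on the neighbors, never on the site's own state, which is what makes the update rule reversible with respect to the product measure.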

In this paper I studied KCMs in a random environment, obtained by mixing constraints. The first mixture is of FA1f together with FA2f, and the second of FA1f with the North-East model. In both cases we choose the constraint randomly for each vertex at the beginning, and then run the process.

The main difficulty compared to the more classical KCMs is that this model is not homogeneous: once the constraints are fixed, different points of the lattice behave differently. In particular, if we look far enough we will come across “untypical” regions that are difficult to control. This is why the relaxation time, which usually describes the typical time scales of KCMs, is no longer the correct quantity to study. In many models the way to overcome this issue is via coupling methods, but KCMs are not attractive, so this solution does not work.

In this paper I show how to use variational principles other than the spectral gap in order to understand how long it takes for an arbitrary site to become empty. This time can be much smaller than the relaxation time. For example, in the FA1f and North-East mixture, the relaxation time is infinite but the emptying time of the origin is typically a power of $q$. For the FA1f and FA2f mixture, I show that the emptying time again behaves like a power of $q$ (here the relaxation time is exponential). In this case, I can also prove that this power is random, i.e., it changes from one realization of the environment to another.

This project was inspired by an observation in the physics literature relating the loop-erased random walk to a $\phi^4$ theory with $n=-2$ components. We found two possible spin systems whose observables can be related to loop-erased random walks. Both of them have two bosons and four fermions, giving a total of $n=-2$ components. The hope is that this could be the starting point of a rigorous renormalization procedure giving information on the scaling limit of the loop-erased random walk, like the calculation of its Hausdorff dimension that non-rigorous field theory techniques provide.

In this work we studied the Fredrickson-Andersen model on the polluted lattice (i.e., the lattice with some vertices removed at random). This model is part of a large family of models called kinetically constrained models, which try to explain certain aspects of glasses and granular materials. We wanted to understand how the typical time scales diverge when the density is very high. This question had been studied before for many kinetically constrained models in homogeneous systems, and the different models show behaviors that can also be observed experimentally.

But physical systems are not always homogeneous (e.g., a granular material with two grain types). In these cases, it can be more appropriate to consider a dynamics in random environment. The random environment that we chose is the polluted lattice, which had already been analyzed for a closely related model called bootstrap percolation.

Many of the tools used to understand kinetically constrained models in homogeneous environments stop working when the system is not homogeneous (e.g., the spectral gap). The reason is that in random systems, remote regions that are not really typical influence the quantities we study, even though they shouldn't affect the actual dynamics. We needed a way to analyze the time scales of the model that ignores these untypical regions. We managed to do that by looking at the hitting time of events that depend only on the state near the origin (or any arbitrary vertex).

In this work, I tried to understand how bootstrap percolation on Galton-Watson trees behaves near its phase transition. I analyzed the $r$-children bootstrap percolation: each vertex of the tree is either infected or healthy, and at each step a healthy vertex becomes infected if it has at least $r$ infected children. In the animation, healthy vertices (white) become infected (black) if they have at least two infected children. This is a deterministic dynamics in discrete time; the randomness comes from the tree (which is random) and from the initial configuration. In our case, sites are initially infected independently with probability $q$.
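The dynamics is easy to state in code. Here is an illustrative sketch, with the tree given as explicit child lists rather than sampled from a Galton-Watson law:

```python
def bootstrap_fixed_point(children, infected, r):
    """Run r-children bootstrap percolation to its fixed point.

    `children[v]` lists the children of vertex v, `infected` is the set of
    initially infected vertices; a healthy vertex becomes infected once at
    least r of its children are infected.  Infection never heals, so the
    process reaches a fixed point; returns the final infected set.
    """
    infected = set(infected)
    changed = True
    while changed:
        changed = False
        for v, kids in children.items():
            if v not in infected and sum(k in infected for k in kids) >= r:
                infected.add(v)
                changed = True
    return infected

# Binary tree of depth 2, all four leaves infected, threshold r = 2:
# the infection sweeps up from the leaves to the root.
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
print(bootstrap_fixed_point(tree, {3, 4, 5, 6}, 2))  # -> {0, 1, 2, 3, 4, 5, 6}
```

A vertex whose subtree never accumulates $r$ infected children stays healthy forever; such vertices form the “immune” structures discussed below.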

Depending on the offspring distribution of the Galton-Watson tree and on the threshold $r$, a phase transition can occur: up to a certain value of $q$, some of the sites belong to a cluster of “immune” sites that can never be infected, while above it all sites eventually become infected with probability $1$. It is even possible to tell whether this probability is continuous or discontinuous at the critical point, and in fact different offspring distributions give different behaviors.

I was interested in the time scaling near the critical point. In the continuous case, the question was how fast the probability to stay healthy forever decays as $q$ approaches its critical value. In the discontinuous case, the picture is different: at the critical value, the limiting probability to stay healthy is nonzero, but once we increase $q$ it becomes $0$. If we look more closely, we see that when $q$ is just above criticality, the probability to remain healthy after a certain number of time steps has a long plateau, and only after a very long time does it decay to $0$. This plateau becomes longer and longer as $q$ approaches criticality, and at the critical point its length diverges. What I tried to understand is how the length of this plateau changes with $q$ for different offspring distributions.

It turns out that there are many possible exponents for the scaling of the plateau length with the distance from criticality, unlike the case of a regular tree, where this exponent is always $1/2$. Another behavior that does not exist on regular trees is the appearance of additional plateaus, not just the one at the limiting probability at criticality.

The fractional Brownian motion with Hurst index $H$ is a continuous Gaussian process. It has stationary increments, and it is self-similar with exponent $H$: stretching time by a scalar $\alpha$ and space by $\alpha^H$ does not change its law. For $H=1/2$ we recover the standard Brownian motion, but for other values of $H$ the process is not Markovian.

We were interested in its records, i.e., the times at which the process reaches a new maximum. For the Brownian motion, the records have the same statistics as the zeros, so they have the same Hausdorff dimension of $1/2$. One way to understand this connection is to think of the simple random walk (with probability $1/2$ to move up and $1/2$ to move down), started at $-1/2$ rather than at $0$. The walk never hits $0$, but we can think of the zero set as the set of times at which the walk changes sign. These times form a point process with independent increments, and the law of each increment is the law of the time it takes a simple random walk to go from $0$ to $1$ (or from $0$ to $-1$, depending on whether the point is even or odd, but the law is the same). This is exactly the same process as the records: whenever we break a record, the next record happens when we hit the current record $+1$.
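The record structure of the simple random walk is easy to check numerically (a quick sketch, not from the paper): for a $\pm 1$ walk, each new record exceeds the previous maximum by exactly $1$, so the record values are $1, 2, 3, \dots$

```python
import numpy as np

rng = np.random.default_rng(1)

def record_times(path):
    """Times n >= 1 at which the path strictly exceeds its running maximum."""
    times, running_max = [], path[0]
    for n in range(1, len(path)):
        if path[n] > running_max:
            times.append(n)
            running_max = path[n]
    return times

# Simple random walk started at 0:
walk = np.concatenate([[0], np.cumsum(rng.choice([-1, 1], size=10_000))])
record_values = [int(walk[n]) for n in record_times(walk)]
# The walk breaks a record precisely when it first hits (current record + 1),
# so record_values is exactly [1, 2, 3, ...].
```

For the fractional Brownian motion this identification fails, which is the point of the next paragraph.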

This reasoning works thanks to the Markov property, and for the fractional Brownian motion it fails. We showed in this work that, unlike the zero set (which has dimension $1-H$), the record set has dimension $H$, so the two coincide only for the Brownian motion.

In this work we analyzed the phase shift of an oscillator caused by random noise. Consider a dynamical system with a stable periodic path (a limit cycle), and perturb it with white noise of small amplitude $\epsilon$. We then compare the noisy path to the original unperturbed one. In the beginning, since the noise is small, we barely feel it, and both systems move more or less together. As time goes by, the noise starts playing a role. Fluctuations transversal to the cycle are suppressed very strongly, since the limit cycle is stable; in the direction parallel to the limit cycle, however, the system can fluctuate freely. This is why we expect the perturbed and unperturbed systems to separate at times of order $\epsilon^{-2}$, which is the time scale of free diffusion. We showed that they indeed separate on this scale, and found an exact expression describing the (stochastic) dynamics of the difference between the two phases.
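A toy example of this mechanism (an illustrative system of my own choosing, not the one from the paper): in polar coordinates, give the radius a strong restoring force toward the unit circle, and let the phase receive the noise directly.

```python
import numpy as np

def noisy_oscillator(eps, omega=2 * np.pi, dt=1e-3, T=10.0, seed=2):
    """Euler-Maruyama for a toy limit cycle: the radius is pulled back to 1
    (transversal fluctuations are damped), while the phase advances at rate
    omega and diffuses freely with amplitude eps."""
    rng = np.random.default_rng(seed)
    r, theta = 1.0, 0.0
    sq = np.sqrt(dt)
    for _ in range(int(T / dt)):
        dWr, dWt = rng.standard_normal(2)
        r += (1.0 - r) * dt + eps * sq * dWr       # damped: stays near 1
        theta += omega * dt + eps * sq * dWt       # free diffusion of the phase
    return r, theta

r, theta = noisy_oscillator(eps=0.05)
phase_shift = theta - 2 * np.pi * 10.0  # drift relative to the noiseless phase
```

The radius stays pinned near $1$, while the phase shift accumulates like a Brownian motion with variance $\epsilon^2 t$, becoming of order one only at times of order $\epsilon^{-2}$.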

The picture shows an example of a dynamical system with a stable limit cycle (black) and two realizations of the noisy dynamics (blue and red). The simulation covers one period, and we see that some random phase difference is created. The animation shows both the unperturbed dynamics (blue) and the noisy one (red), and we see how the phase difference between the two evolves.

Consider a random walk on the discrete segment $[0,N]$, reflected at $0$. The random walk starts at $0$ and ends when it reaches $N$. The points of $[0,N]$ are of two types: background points and boosts. At background points, the probability to move right is $q$ and to move left is $1-q$. At boosts, the probability to move right is $p$ and to move left is $1-p$. If we are allowed $k$ boosts, where should we place them in order to minimize the expected hitting time of $N$?

This question was solved here for $\frac{1}{2} = q < p$, where the optimal placement is equally spaced boosts. We extended this result to $\frac{1}{2} < q < p$, showing that the same spacing works.
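Placements can be compared numerically by solving the linear system for the expected hitting times. The sketch below (with illustrative parameters; `expected_hitting_time` is a helper written for this note, not code from the paper) uses the first-step equations $T_0 = 1 + T_1$ at the reflecting boundary and $T_i = 1 + p_i T_{i+1} + (1-p_i) T_{i-1}$ in the bulk, with $T_N = 0$.

```python
import numpy as np

def expected_hitting_time(p_right):
    """Expected time for a walk on {0,...,N}, reflected at 0 and started
    there, to reach N = len(p_right) (assumed >= 2).  p_right[i] is the
    probability to step right from site i; from 0 the walk always steps to 1."""
    N = len(p_right)
    A = np.zeros((N, N))
    A[0, 0], A[0, 1] = 1.0, -1.0           # T_0 = 1 + T_1 (reflection)
    for i in range(1, N):
        A[i, i] = 1.0
        A[i, i - 1] = -(1.0 - p_right[i])  # step left
        if i + 1 < N:
            A[i, i + 1] = -p_right[i]      # step right (T_N = 0 drops out)
    return np.linalg.solve(A, np.ones(N))[0]

# Background drift q = 0.55, three boosts of strength p = 0.9 on [0, 30]:
N, q, p = 30, 0.55, 0.9
background = np.full(N, q)
spaced = background.copy(); spaced[[7, 15, 23]] = p       # roughly equally spaced
clustered = background.copy(); clustered[[1, 2, 3]] = p   # bunched near the origin
```

Comparing `expected_hitting_time(spaced)` with `expected_hitting_time(clustered)` for parameters like these gives a numerical feel for the result; as a sanity check, with $q=1/2$ and no boosts the solver returns the classical value $N^2$.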