
Philip LaPorte

mathematics phd candidate, uc berkeley

I am a fourth-year graduate student at UC Berkeley, co-advised by Steven Evans and Martin Nowak (Harvard). I am interested in the mathematics of social behavior, especially game theory and evolutionary game theory. Recently I have been working on stochastic games and direct reciprocity. I am very grateful to be supported by an NSF Graduate Research Fellowship and a Simons Dissertation Fellowship. As an undergraduate, I studied math at Harvard with interests in geometry and topology.

publications

From simultaneous to leader-follower play in direct reciprocity

(in press) PNAS Nexus.  PL, L Pracher and S Pal

Payoff equivalence and complete strategy spaces of direct reciprocity

(2026) Proceedings of the National Academy of Sciences.  PL, C Hilbe, NE Glynatsi and MA Nowak

A geometric process of evolutionary game dynamics

(2023) Journal of the Royal Society Interface.  PL and MA Nowak

Adaptive dynamics of memory-1 strategies in the repeated donation game

(2023) PLOS Computational Biology.  PL, C Hilbe and MA Nowak

talks and presentations

Upcoming talk: Complete strategy spaces of direct reciprocity

(2026) AMS Spring Eastern Sectional Meeting (Special Session on Mathematical Modeling of Ecological and Evolutionary Dynamics)

Factored strategies in stochastic games

(2025) UC Berkeley Probability Seminar

Strategic complexity in stochastic games

(2025) AMS New England Graduate Student Conference

Cooperation in direct reciprocity

(2024) Fu Lab, Dartmouth Mathematics Dept

How to exploit majority rule: the McKelvey chaos theorem

(2023) Many Cheerful Facts seminar, UC Berkeley

misc

A map $\Delta^n\to \mathbb{R}^m$ which is not affine-linear but maps straight line segments to straight line segments. Such maps arise when the payoff vector of a repeated game is written as a function of one player's randomized action at a given history, with all other strategy components held fixed. I first saw this property for 2x2 games in McAvoy and Nowak (2019). It has a neat proof for discounted Markov decision processes with vector-valued payoffs that depend on the current state.
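One standard way to produce such maps (a generic illustration, assuming only that the affine denominator below is positive on $\Delta^n$; it is not taken from any of the papers above) is a projective map with a common denominator:

$$f(p) \;=\; \frac{Ap + b}{c^{\top}p + d}, \qquad A \in \mathbb{R}^{m\times(n+1)},\; b \in \mathbb{R}^m,\; c^{\top}p + d > 0 \text{ on } \Delta^n.$$

Writing $p_t = (1-t)p_0 + t p_1$ and $s = \dfrac{t\,(c^{\top}p_1 + d)}{(1-t)(c^{\top}p_0 + d) + t\,(c^{\top}p_1 + d)}$, a short computation gives $f(p_t) = (1-s)f(p_0) + s\,f(p_1)$, so every straight line segment maps onto a straight line segment, even though $f$ is in general not affine.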

[See also Joseph LaPorte.]