Paying and Persuading
Abstract
We study the joint design of information and transfers when an informed Sender can motivate Receiver by both paying and (Bayesian) persuading. We introduce an augmented concavification method to characterize Sender's value from jointly designing information and transfers. We use this characterization to show Sender strictly benefits from combining information and payments whenever the actions induced at adjacent kinks in the augmented concavification differ from those in the information-only concavification. When Receiver has an outside option, Sender always first increases the informativeness of their experiment before adjusting transfers – payments change if and only if full revelation does not meet Receiver's outside option constraint. Moreover, we show repeated interactions cannot restore ex-post efficiency, even with transfers and arbitrarily patient agents. However, Sender benefits from linking incentives so long as Receiver prefers the stage-game optimal information structure to no information. Our results have implications for platforms such as Uber, where both monetary rewards and strategic information disclosure influence driver behavior.
Keywords: Bayesian Persuasion, Constrained Information Design, Transfers, Platforms.
JEL Codes: D82, D83, D86, L15.
1 Introduction
Consider the problem faced by Uber, who seeks to design a platform such that drivers fulfill ride requests as often as possible (i.e. drivers accept all rides that are suggested to them). Standard economic theory suggests two canonical tools Uber can use to attain this goal: either compensate drivers directly with money (Holmström (2017)), or indirectly, with information (Kamenica (2019)). Indeed, a rich literature has studied the nature and limits of each of these two instruments in isolation. Yet little has been said about how to wield information and transfers together, even though modern platforms often have the ability to do both. For example, while Uber can strategically disclose information about the quality of a specific ride to a driver (and, because the algorithm is set in advance, commit to how this disclosure is made), they can also simply pay the driver more for accepting a ride. (For example, Uber advertises differential tiers for their drivers, and restricts the information lower-tiered drivers can see about their rides: see this explanation.) When both tools are available to Uber, a new host of questions arises: how does Sender optimally trade off information and transfers (and do they even want to use both)? What are the limits of what can be done when Sender can change both drivers' preferences and information? How does Sender's optimal value evolve with the prior belief about the state? Are there qualitative differences between how these tools are used to deliver utility to Receiver in the presence of exogenous outside options?
Motivated by these questions, we embed action- and belief-contingent transfers into the canonical finite persuasion model of Kamenica and Gentzkow (2011). Our main result – Theorem 1 – provides a geometric characterization of Sender's value from jointly designing information and transfers, which we call the -cavification. To do so, we first show that there exists a unique set of action-contingent transfers – which we call the canonical transfers – that is optimal at every prior and choice of information structure. We then identify a finite set of extremal beliefs – those at which Receiver is maximally indifferent between different actions – which uniquely pin down the value of the -cavification for all possible priors. When Sender faces a moment persuasion problem – as is the case in linear persuasion settings – extremal beliefs partition the space of prior beliefs into finitely many intervals, along which Sender's decision on how to use information and transfers depends only on the value of their -cavification at each extremal belief. We use this characterization to derive easily checkable sufficient conditions under which Sender prefers to use both information and transfers (Proposition 2) and show how to use it to compute the -cavification in polynomial time (Corollary 1).
We next turn to the case where Receiver has an exogenous outside option. Again we show the canonical transfers uniquely motivate Receiver in the presence of a utility promise constraint – any constrained optimal transfer rule is exactly the canonical transfer rule modulo a constant. This characterization, combined with a strong duality result for constrained information design (see Doval and Skreta (2023)), implies our second main result (Theorem 2): so long as Sender's optimal payment is not exactly equal to the canonical transfers, the optimal information structure must be full information. We interpret this as a sequencing result on the role of information and transfers in meeting an outside option constraint for Receiver – Sender always prefers to first use information to guarantee Receiver a certain utility, and only turns to augmenting transfers after this channel is completely exhausted.
Finally, we consider the role of repeated interactions. Contrary to standard intuition from repeated games, we show that even with perfectly observable actions and transfers, repetition need not restore ex-post efficiency (c.f. Levin (2003)). In fact, Sender need not benefit from dynamic interaction (Proposition 4). Despite this, we show that a sufficient condition for Sender to benefit from dynamically linking Receiver incentives is for Receiver to strictly prefer Sender's static-optimal information structure to no information. This condition is violated in the canonical two-state, two-action example in Kamenica and Gentzkow (2011) but is satisfied in many richer examples. Our characterization applies even when the cost of transfers is very large (so transfers may never be optimal), and thus provides an independent contribution on the value of repetition in models of pure information design.
The rest of this paper proceeds as follows. We next discuss the related literature. Section 2 works through a simple numerical example that highlights the intuition behind our results. Section 3 presents the general model, and Section 4 solves the static persuasion and transfers problem. Section 5 adds an exogenous outside option constraint and compares the solution to the unconstrained solution. Section 6 studies the repeated persuasion problem. Section 7 concludes and discusses promising avenues for future work.
Related Literature
We contribute to a long and rich literature on information design, initiated by Rayo and Segal (2010) and Kamenica and Gentzkow (2011) for the case of a single receiver, and by Bergemann and Morris (2016), Taneva (2019), and Smolin and Yamashita (2023) for the many-receiver case. Surveys of the burgeoning literature are found in Bergemann and Morris (2019) and Kamenica (2019); we refer the reader to these papers for a more comprehensive discussion. One core insight underlying this literature is that information provision, even without transfers, can both change behavior and benefit Sender. This insight motivates our analysis, which seeks to understand Sender's tradeoff between information provision and transfers when both may be used to motivate an agent.
There are some related models on the joint interaction between transfers and information, albeit in different settings. Bergemann et al. (2015) study the role of varying information on a monopolist's optimal pricing scheme and the implications for the distribution of welfare, focusing on extremal information structures. Bergemann and Pesendorfer (2007), Eso and Szentes (2007), and Bergemann et al. (2022) study revenue-maximizing information disclosure to bidders who then participate in the auction, while Terstiege and Wasser (2022) and Ravid et al. (2022) study auctions where bidders can choose the information they learn about their type (potentially at a cost). We differ from this work in that Sender both designs information and pays the agent (instead of having the agent pay Sender for a good). Finally, Li (2017) studies the effect of limiting transfers on information transmission in a binary-state model where the transfer rule is decided after information. We contribute to all of these models by solving a general finite-state, finite-action problem with information and transfers, highlighting the role of extremal beliefs in characterizing the optimal solution.
There has also been significant research on the dynamics of (optimal) information design, starting from the seminal work of Ely (2017) and Renault et al. (2017), who analyze the optimal policy when the state evolves according to a Markov chain. Koh and Sanguanmoo (2022) and Koh et al. (2024) analyze general models of persuasion when the state is persistent and the agent takes an irreversible action. Finally, Ely et al. (2022) study a dynamic moral hazard problem where the principal can release information about a persistent fundamental that affects agents' work-or-shirk decisions and thus the realized path of transfers. We contribute to this literature by giving a framework in which a principal jointly designs the path of information and transfers for a transient state.
Along the way, we need to analyze the static persuasion problem subject to an exogenous outside option constraint. To do so, we draw on methods introduced by Doval and Skreta (2023) and Treust and Tomala (2019) for constrained persuasion problems without transfers, and augment their argument to allow for Sender to also pay Receiver. Further afield, Babichenko et al. (2021), Kosenko (2023), and Lorecchio and Monte (2023) all study persuasion problems without transfers; we solve the constrained persuasion problem when Sender can also pay the agent.
Finally, we contribute to the literature that seeks to find geometric characterizations for the value of communication. The concavification theorem – a fundamental result in information design – goes as far back as Aumann and Maschler (1995), and was applied to communication games by Kamenica and Gentzkow (2011). More recent work has found a geometric characterization in games with many players (Mathevet et al. (2020)), for the Bayes welfare set of a game (Doval and Smolin (2024)), for cheap talk with state-independent preferences (Lipnowski and Ravid (2020)), and for general finite cheap talk models (Barros (2025)). We contribute to this literature by providing a geometric characterization of Sender’s joint value from using persuasion and transfers that can be computed in polynomial time.
2 Examples
2.1 The Role of Transfers
We start with the following example based on a one-shot interaction between a driver and Uber. Suppose there are two states of the world: a ride is either good or bad . Let be the probability the driver thinks the state is good. Given the driver's belief about the state of the world, they can do one of three things: reject the ride , accept the ride but then renege and cancel , or accept and fulfill the ride request . Uber has strict, state-independent preferences: they prefer acceptance to rejection to acceptance and then reneging. The driver wishes to accept good rides and reject bad ones. However, the driver prefers accepting and then reneging to either accepting a bad ride or rejecting a good one. Formally, we model the above interaction with the following state-dependent payoffs. (The specification of cardinal payoffs is not important, and is mostly chosen to simplify the algebra.)
[Table: state-dependent Sender/Receiver payoffs for the three actions.]
Having fixed cardinal payoffs, we use the concavification theorem of Kamenica and Gentzkow (2011) to graph Sender’s indirect utility function and its concavification in Figure 1.
Suppose . Then Sender's optimal value from persuasion is given by . If, instead, Sender could not use persuasion, they would pay the agent to induce , giving Sender a payoff of .
What happens if Sender could use both transfers and persuasion? Consider the following scheme: with probability , reveal the state is for sure, and pay the agent nothing, allowing them to take . With probability , induce belief , and pay the agent dollar to take action (noting their expected payoff from at is ). This gives Sender a payoff of , which is greater than their value from just persuasion or just transfers. Figure 2 gives a graphical illustration of this joint persuasion and transfer scheme.
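For completeness, the mixing weights in such a scheme come from Bayes plausibility. In our own notation (the symbols $\mu_0$, $\mu_1$, $\mu_2$, $\lambda$ are not taken from the example), if a prior $\mu_0$ is split into two posteriors $\mu_1 < \mu_0 < \mu_2$ with weights $1-\lambda$ and $\lambda$, then
$$(1-\lambda)\,\mu_1 + \lambda\,\mu_2 = \mu_0 \quad\Longrightarrow\quad \lambda = \frac{\mu_0 - \mu_1}{\mu_2 - \mu_1},$$
and Sender's value from the scheme is the corresponding convex combination of their (transfer-inclusive) payoffs at the two induced posteriors.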
There are a few features of the above graph worth explicitly flagging. First, the candidate solution strictly outperforms (for Sender) the policy where Sender gives full information and then pays to induce the efficient action at each degenerate belief (in which case payments are never used and Sender gets a payoff of ). This computation shows the ex-post efficient payoff is not attained by information and transfers, which is potentially surprising as actions are perfectly observed and Sender has access to transfers in our model. However, efficiency fails because of Sender's limited liability constraint: Sender cannot charge Receiver when a state realization guarantees Receiver positive surplus, and hence Sender may prefer to withhold information about the state in order to relax the limited liability constraint and hold on to more of the surplus.
Second, the optimum features both transfers and persuasion, highlighting that both tools can be useful: in particular, a bang-bang solution where either only transfers or only persuasion are used at some prior belief is strictly suboptimal. Finally, in equilibrium, the induced beliefs are exactly those where either (1) the state is fully revealed or (2) Receiver’s best-response correspondence is multiple-valued. Theorem 1 generalizes this observation and characterizes all beliefs that might be induced at an optimal information and transfer policy.
2.2 The Role of Dynamics
Consider again Example 2.1 but suppose the prior is . Here, one candidate optimum is to pay Receiver to take action , which yields a payoff of over both periods. However, this is not optimal. (Note that by Theorem 1, this is the static optimum at this prior.) In particular, consider the following scheme.
(1) At , recommend Receiver take action .
(2) If Receiver follows this recommendation, pay a lump sum of , and split beliefs into with probabilities and have Receiver best respond myopically.
(3) Else, if Receiver does not follow this recommendation, revert to the static optimum.
If Receiver follows the recommendation, then at Sender attains payoff . In the second period, Sender gets payoff with probability and payoff with probability . Thus, Sender’s expected payoff is .
This recommendation policy is incentive compatible for Receiver. This is clearly true at since recommendations are myopically optimal for Receiver. At , Receiver's expected payoff is . If they do not follow the recommendation, their continuation payoff is . If they follow the recommendation, their expected continuation payoff is , and hence they cannot profitably deviate. Thus, Sender benefits from the ability to dynamically design information and transfers in this model.
The ability to design both is crucial. If Sender only had access to transfers, then the static optimum would be optimal. If Sender could only persuade, their optimum must be bounded above by the repeated optimal persuasion value, , plus Receiver's total surplus in the first period under the optimal persuasion value, , which is less than the joint surplus value. Two key mechanisms are at play: first, the ability of Sender to threaten no information in the future, which relaxes Receiver incentive compatibility constraints, and second, transfers, which help smoothly ensure Receiver IC exactly binds, maximizing Sender's potential surplus. Proposition 3 clarifies the extent to which these two forces intertwine to ensure Sender benefits from persuasion and transfers.
3 Model
There are two players, Sender and Receiver . There is a finite, payoff-relevant state of the world , drawn from a full-support prior . Receiver can take one of finitely many actions .
Before Receiver takes their action, Sender can influence Receiver in one of two ways. First, Sender can design Receiver's informational environment by committing to any Blackwell experiment from states to some message space – formally, a distribution of posterior beliefs such that . (Here we follow the standard belief-based approaches of Kamenica and Gentzkow (2011) and Bergemann and Morris (2016), and omit the exact details, which are spelled out in the above references.) Second, Sender can design a transfer scheme, formally a mapping from states, messages, and actions into nonnegative real numbers. Let be the set of all transfer rules. Implicitly, the requirement that the transfer cannot be negative imposes a limited liability constraint: Sender cannot ask the agent to pay them conditional on a good state realization of the world. This constraint prevents Sender from immediately attaining their first best and is a core friction in the paper.
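In symbols (our notation, following the belief-based approach cited above), the two instruments are
$$\tau \in \Delta(\Delta(\Theta)) \ \text{ with } \ \mathbb{E}_{\mu \sim \tau}[\mu] = \mu_0 \qquad \text{and} \qquad t : \Theta \times M \times A \to \mathbb{R}_{\ge 0},$$
where the first condition is the usual Bayes plausibility (martingale) requirement and the second encodes limited liability through the nonnegativity of transfers.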
Sender and Receiver have baseline preferences and over the action and state, respectively. Given any realized posterior belief , transfer rule , and Receiver action , Sender and Receiver realized payoffs are given by
respectively. Here is a parameter reflecting the efficiency of transfers: if , then it costs Sender units to transfer a unit of surplus to the agent. (In principle, transfers may be imperfectly efficient because of differences in marginal tax rates, risk aversion, minimum wage laws, etc.; we see this as a reduced-form way to model these potential frictions.)
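A minimal way to write these realized payoffs, in assumed notation (we write the transfer-efficiency parameter as $c$ and the transfer directly at the belief–action level, anticipating the simplification below), is
$$v_S(\mu, t, a) = \mathbb{E}_{\mu}\big[u_S(a,\theta)\big] - c\, t(\mu, a), \qquad v_R(\mu, t, a) = \mathbb{E}_{\mu}\big[u_R(a,\theta)\big] + t(\mu, a),$$
so that Sender bears a cost of $c$ per unit of surplus delivered to Receiver through transfers.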
Given the above realized utilities, for any posterior belief and transfer rule , define Receiver's best-response correspondence to be
Denote Sender’s optimal selection from Receiver’s best response correspondence is
We will use to denote the baseline without transfers, i.e. there are no payments regardless of the action, state, or belief. Consequently, is Receiver's action given belief in the absence of transfers, i.e. as it is defined in Kamenica and Gentzkow (2011). Since Receiver evaluates transfers in expectation given their belief, it is without loss of generality to suppose the transfer rule does not condition on the state (in particular, at each belief , we can pay Receiver their expected transfer). Consequently, we will drop the dependence of on the state in the remainder of the paper to simplify notation. From here, define Sender's indirect value function given a transfer rule to be
noting the choice of from implies is upper semi-continuous in for any fixed transfer rule . A tuple is optimal at prior if it solves the program
Let be the value function that is attained by an optimal tuple at a prior .
Recall by the revelation principle of Kamenica and Gentzkow (2011) that it is sufficient for to support at most many actions; without loss of generality, we will restrict to these straightforward signals for the remainder of the exposition.
4 Static Equilibria
4.1 -Concavification
What is the value of persuasion and transfers? Before we explicitly characterize the optimum, it will be useful to restrict the set of transfer rules that are part of an optimal tuple. Define Sender’s favorite actions at a belief to be
This is the set of actions where Sender gets the highest possible utility, assuming they choose payments so that Receiver is exactly indifferent between their default action without payments and action . Given this set, define the canonical transfers at a belief as
The canonical transfers are the cheapest way to induce an action when the agent would prefer to take some other action absent transfers; in particular, they pay only if Receiver takes the desired action and not otherwise. (Because Sender and Receiver are both indifferent after the transfer procedure, any selection from the correspondence is payoff-equivalent for both players, and is generically single-valued. When it is without loss of ambiguity, we will treat as a singleton.) Formally, we have the following:
Lemma 1.
For any prior and any optimal , the tuple is optimal as well.
The proof is given in PROOF OF LEMMA 1. Note one useful property of the canonical transfers: they are chosen to leave Receiver's payoff the same both before and after payments.
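In assumed notation, writing $a_0(\mu)$ for Receiver's best response at belief $\mu$ without transfers, the canonical transfer that induces an action $a$ is the indifference payment
$$t^{*}_{a}(\mu) \;=\; \mathbb{E}_{\mu}\big[u_R(a_0(\mu),\theta)\big] - \mathbb{E}_{\mu}\big[u_R(a,\theta)\big] \;\ge\; 0,$$
paid only when $a$ is taken, so that Receiver's post-transfer payoff from $a$ coincides with their no-transfer payoff $\mathbb{E}_{\mu}[u_R(a_0(\mu),\theta)]$.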
We can now use the canonical transfers to define the following function.
Definition 1.
The transfer-augmented indirect value function is given by
The structure of the transfer-augmented value function sheds some insight into the effect of transfers; in particular, it admits a useful rewriting.
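In the same assumed notation, one way to write the transfer-augmented value and its rearrangement is
$$\hat v(\mu) \;=\; \max_{a \in A}\Big\{ \mathbb{E}_{\mu}\big[u_S(a,\theta)\big] - c\, t^{*}_{a}(\mu) \Big\} \;=\; \max_{a \in A}\Big\{ \mathbb{E}_{\mu}\big[u_S(a,\theta) + c\, u_R(a,\theta)\big] \Big\} \;-\; c\,\mathbb{E}_{\mu}\big[u_R(a_0(\mu),\theta)\big].$$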
The first term is a combination of Sender and Receiver payoffs, and is increasing in , the cost of transfers to Sender: the more costly it is to change Receiver actions, the more Sender wants to behave "as-if" they are acting in Receiver's best interest. The second term is a lump sum, independent of the induced action, that models Receiver's "informational outside option" (their payoff from taking an action when they are not paid). We can now prove the following result about the geometric value of persuasion. Recall that the concavification of a function from some topological vector space into , denoted , is the smallest concave function such that at every .
Proposition 1 (Pointwise Maximization).
The optimal value function from persuasion concavifies the transfer-augmented indirect value function: .
The proof is deferred until PROOF OF THEOREM 1. A priori, the joint choice of optimal information structure and transfer scheme might seem hard to compute, since changes in the information structure may cause Sender to want to induce different actions with different payments at different beliefs. Proposition 1 implies this is unnecessary: it is sufficient for Sender to first maximize their payoff belief-by-belief, supposing they only had access to transfers at that belief (which yields ), and then to concavify the resulting pointwise-maximized function.
Though Proposition 1 greatly simplifies the problem of finding the optimal value of persuasion and transfers, it still requires an uncountable number of individual computations (one for each belief). Note, however, that in finite persuasion models the optimal value of persuasion is piecewise linear; thus, to compute the value of persuasion, it is sufficient to find each kink of the concavification and then linearly interpolate the intermediate values. Theorem 1 exactly finds a set of extremal beliefs which are sufficient to capture all of the kinks in and hence trace out the entire concavification (and hence, the entire value of persuasion and transfers). Towards defining extremal beliefs, define the sets
to be the sets of beliefs at which an action is optimal for an agent without transfers. Clearly, . In the proof of Theorem 1, we show each is a convex, compact polytope contained in , using arguments from Gao and Luo (2025). We have the following definition.
Definition 2.
A belief is extremal if there exists such that .
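In assumed notation, the best-response regions referenced above are
$$\Delta_a \;=\; \big\{ \mu \in \Delta(\Theta) \;:\; \mathbb{E}_{\mu}[u_R(a,\theta)] \ge \mathbb{E}_{\mu}[u_R(a',\theta)] \ \text{ for all } a' \in A \big\},$$
and, consistent with the proof of Theorem 1 below, the extremal beliefs are drawn from the finitely many extreme points of these polytopes.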
We give an example of extremal beliefs in a three-state, three-action problem below with the extremal beliefs depicted in brown.
Definition 3.
Fix a set . The -concavification of a function , denoted , is the smallest concave function such that for all .
If , then the concavification is exactly the -cavification. In general, for arbitrary , we only have . The main mathematical content of Theorem 1 is that, when is (the finite set of) extremal beliefs, the inequality holds with equality at all beliefs.
Theorem 1 (Finite -cavification).
There exists a finite set of extremal points such that for every belief .
We defer the (somewhat technical) proof to PROOF OF THEOREM 1. Theorem 1, however, has several useful implications. First, since the set of extremal beliefs is independent of the effectiveness of transfers, there is a finite set of beliefs (independent of ) which are exactly the ones that might be induced in an equilibrium at some prior belief. As grows large (i.e. ), it becomes very costly to use transfers, so this procedure recovers exactly the value of persuasion. This implies a "qualitative invariance" property of adding transfers to the information design problem: it will never change the set of beliefs that might be supported in an optimal experiment, regardless of the prior. When , Receiver preferences become "effectively" aligned with Sender preferences, and full information is optimal.
Second, it gives a fast way to compute the (finite) set of beliefs an analyst needs to consider to characterize the value of persuasion, since the set of extremal beliefs is often exactly the set of beliefs at which Receiver's best-response correspondence is multiple-valued. For example, let's return to the problem in Example 2.1; here, it is easy to see the extremal points are exactly . Applying Theorem 1 implies the proposed information and payment scheme – splitting beliefs into and and paying at to induce – is exactly optimal for Sender. Moreover, this splitting of beliefs (with different weights to satisfy Bayes plausibility) is optimal for any prior . In general, so long as Receiver's preferences about the state are unidimensional, there exists a polynomial-time algorithm that delivers the value of persuasion and transfers. We elaborate on this more below in Section 4.2.
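To make the computational point concrete, the following is a minimal Python sketch of this procedure for a two-state (and hence moment-persuasion) problem. The payoffs below are illustrative placeholders rather than the cardinal payoffs of Example 2.1, the transfer-cost parameter is set to one, and all helper names are our own.

```python
import itertools

# Two states (bad, good); a belief mu is the probability of the good state.
# Illustrative placeholder payoffs (NOT the paper's example):
# u_R[a] = (Receiver payoff in bad state, in good state); u_S[a] is state-independent.
actions = ["reject", "renege", "fulfill"]
u_R = {"reject": (0.0, -1.0), "renege": (0.5, 0.5), "fulfill": (-1.0, 1.0)}
u_S = {"reject": 0.0, "renege": -1.0, "fulfill": 1.0}
c = 1.0  # cost to Sender of transferring one unit of surplus to Receiver


def receiver_value(a, mu):
    bad, good = u_R[a]
    return (1 - mu) * bad + mu * good


def v_hat(mu):
    """Transfer-augmented value: Sender pays the canonical (indifference) transfer."""
    default = max(receiver_value(a, mu) for a in actions)  # Receiver's no-transfer payoff
    return max(u_S[a] - c * (default - receiver_value(a, mu)) for a in actions)


# Candidate extremal beliefs: degenerate beliefs plus pairwise indifference points.
candidates = {0.0, 1.0}
for a, b in itertools.combinations(actions, 2):
    (a_bad, a_good), (b_bad, b_good) = u_R[a], u_R[b]
    denom = (b_good - b_bad) - (a_good - a_bad)
    if abs(denom) > 1e-12:
        mu = (a_bad - b_bad) / denom
        if 0.0 <= mu <= 1.0:
            candidates.add(mu)

points = sorted((mu, v_hat(mu)) for mu in candidates)


def value_of_persuasion_and_transfers(prior):
    """Concave envelope of the finitely many candidate points, evaluated at the prior."""
    best = max((v for mu, v in points if abs(mu - prior) < 1e-12), default=float("-inf"))
    for (m1, v1), (m2, v2) in itertools.combinations(points, 2):
        if m1 <= prior <= m2 and m2 - m1 > 1e-12:
            lam = (prior - m1) / (m2 - m1)
            best = max(best, (1 - lam) * v1 + lam * v2)
    return best


print(value_of_persuasion_and_transfers(0.3))
```

The candidate set plays the role of the extremal beliefs in Theorem 1; with two states and finitely many actions there are at most $\binom{|A|}{2}$ pairwise indifference points, so the whole routine runs in time polynomial in the number of actions.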
What if instead in Example 2.1? Reading off Figure 2 implies the value of persuasion and transfers is then given by , so the value of persuasion and transfers is . However, at , Receiver has value from taking action , so Sender can also simply pay Receiver and offer no information, which also yields a value of . Thus, while splitting beliefs into and (and paying for action at ) is optimal, it is not uniquely optimal – and in fact Sender does not benefit from the ability to design the information structure at all. This motivates the following question: when does Sender benefit strictly from the combination of persuasion and transfers, relative to the baseline of just transfers or just persuasion? We turn to this question next.
4.2 Benefiting From Transfers
To give a complete answer to this question, we will need some control over the convex structure of the set of extremal beliefs. When there are many states, this can in general be a complicated problem, since the exact combinatorial structure underpinning the space of extremal beliefs can be quite subtle (see, for example, recent work by Kleiner et al. (2024)). Consequently, to answer the question of when Sender benefits from persuasion and transfers, we focus on the moment persuasion case, where Sender cares about the state only through a one-dimensional summary statistic. Formally,
Definition 4.
Sender faces a moment persuasion problem if there exists a continuous function and a value function such that .
Moment persuasion is satisfied whenever Receiver's payoff depends only on a moment of the state (e.g. linear persuasion), in all problems with only two states but potentially arbitrarily many actions (e.g. Example 2.1), and in problems where Receiver's best response is linear in the state (e.g. quadratic loss preferences). Thus, it nests many of the standard settings studied by the literature. Note that if a moment persuasion problem exists for , then it exists for all transfers, as transfers enter both Sender and Receiver payoffs linearly. We will often abuse notation and use and to refer to and , respectively. Our first result under the moment persuasion condition formalizes the discussion following Theorem 1 that restricting to "extremal" points makes computing the joint value of transfers and persuasion fast.
Corollary 1.
Suppose Sender faces a moment persuasion problem. Then there exists a polynomial-time algorithm which computes the value of persuasion and transfers.
The idea is to use the fact that, under moment persuasion, extremal beliefs are pinned down by pairwise indifference conditions, and hence we need only solve a finite system of linear equalities to characterize , after which finitely many comparisons (scaling linearly in the number of actions) suffice to compute the optimal induced action. The formal proof is found in PROOF OF COROLLARY 1. We now turn to the main question of this section.
Definition 5.
Say Sender strictly benefits from information and transfers at if
Sender benefits from only transfers if .
Proposition 2.
Suppose Sender faces a moment persuasion problem. Let . Represent , , so that if and only if . For generic preferences and priors ,
(1) If or , Sender benefits from transfers.
(2) If and and , Sender benefits strictly from persuasion and transfers.
(3) If and , Sender does not benefit from transfers (though may benefit from only persuasion).
The proof can be found in PROOF OF PROPOSITION 2. The set is the set of all extremal beliefs which are supported in some optimal information policy across all possible priors: these are exactly the "kinks" in the -cavification function. The genericity condition only has bite for Condition (2), where it implies that the correspondence is single-valued at prior . Finally, the assumption in Condition (2) ensures the answer to our question is nontrivial: otherwise, persuasion would be unnecessary, as the same action would always be induced at the optimum.
Proposition 2 dovetails nicely with Example 2.1. In particular, it gives a justification for why only transfers are useful on the interval (since the same action is induced at both beliefs and ) and also justifies why the optimal solution requires both information and transfers on the interval . However, the conditions are not exhaustive: it may be possible that Sender benefits from transfers and persuasion even if only the hypotheses of Condition (1) are satisfied (for example, if Sender prefers to pay at the “lower” endpoint on the value function but not the higher one, but the higher action would be induced at the prior under only transfers).
Proposition 2 also implies there is “essentially” an interval structure behind persuasion and transfers. In particular, prior beliefs can be partitioned into finitely many connected sets where (modulo finitely many points), Sender’s decision to either use transfers or use persuasion (and the transfers and beliefs induced) will be constant on each connected set. In the context of the motivating rideshare example, Proposition 2 implies that Uber’s optimal information and transfer policy (1) divides drivers into distinct intervals based on their prior belief about the quality of the ride (i.e. the location of the ride, the driver’s active time, etc.), and (2) within each interval, pursues the same policy by adopting the same information structure (modulo weights to satisfy Bayes plausibility).
5 Exogenous Outside Options
In Section 4, we supposed Sender is completely unconstrained in their joint information and transfer scheme. Yet this is often not the case for platforms that can jointly vary both tools; for example, Uber drivers often have outside options, dictated, for example, by their labor-leisure tradeoff or prevailing minimum wage laws. In this section, we adapt our baseline analysis to the problem where Receiver has an exogenous outside option of , and suppose Sender must guarantee Receiver at least this value. In particular, we interpret our results as shedding light on the relative distortions between information and transfers.
Definition 6.
Say is the value of the constrained problem with utility promise if it solves the program
Any tuple that solves this program is -constrained optimal.
The first constraint is the standard martingale constraint in persuasion; the second is the novel utility promise constraint. What do -constrained optimal solutions look like? For a fixed utility promise , we might expect the solution to differ from an optimal solution where in two ways. First, it may affect the structure of transfers paid to the agent, since Sender may want to depart from the canonical transfers at different actions in order to fulfill the utility promise. Second, it may affect the structure of information, as Sender may wish to alleviate the utility promise constraint by increasing Receiver utility through more informative disclosure.
Theorem 2.
For any , there exists a -constrained optimal solution of the form for a constant . Moreover, if , then the optimal information policy is full information: supports only degenerate beliefs.
Theorem 2 has two important implications. First, it implies that it is without loss of generality to consider the canonical transfers plus a (potentially zero) lump sum payment . Thus, a utility promise constraint never causes Sender to change the qualitative way in which they employ transfers to motivate Receiver actions. Second, it implies there is a sequential nature to how Sender balances their informational and monetary tools in fulfilling their utility promise constraint: they will always first offer Receiver more information before they begin to alter the amount Receiver is paid.
The proof of Theorem 2 proceeds in a few parts. First, we show all optimal transfer rules take the form for some . The proof is deferred to PROOF OF LEMMA 2.
Lemma 2.
Let be -constrained optimal. Then there exists a transfer rule for some such that is -constrained optimal.
Because the canonical transfers render Receiver exactly indifferent between taking their default action and the action induced on path, Receiver's utility under the information and transfer policy depends only on the information structure and any lump-sum payment.
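In assumed notation, under an experiment $\tau$ and transfers equal to the canonical transfers plus a lump sum $s \ge 0$, Receiver's expected utility is
$$\mathbb{E}_{\mu \sim \tau}\Big[\mathbb{E}_{\mu}\big[u_R(a_0(\mu),\theta)\big]\Big] + s,$$
i.e. their no-transfer value of the information plus the lump sum.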
That is, up to a constant, Receiver benefits from a utility promise only through increased information transmission, not through changes in the action-contingent payment scheme. We will use this observation to simplify the utility promise constraint below.
Suppose strong duality holds for the constrained problem (this is verified explicitly in Step 1 of the proof of Theorem 2 in PROOF OF THEOREM 2). If this is the case, then by arguments in Doval and Skreta (2023) and Lemma 2, we can rewrite Sender's objective function in Lagrangian form, with a multiplier on the utility promise constraint.
Supposing a first-order approach is valid (e.g. ), differentiating in implies is necessary at an optimal solution. Plugging this back into the formula for the objective and simplifying some terms pins down the reduced objective.
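A sketch of this reduction, in assumed notation with multiplier $\lambda \ge 0$ on the utility promise $w$ and lump sum $s \ge 0$: the Lagrangian objective takes the form
$$\sup_{s \ge 0}\ \inf_{\lambda \ge 0}\ \mathrm{cav}\Big[\hat v(\mu) - c\,s + \lambda\big(\mathbb{E}_{\mu}[u_R(a_0(\mu),\theta)] + s - w\big)\Big](\mu_0),$$
and differentiating in $s$ at an optimum with $s > 0$ forces $\lambda = c$. Substituting this back, the term inside the concavification becomes (up to the constant $-c\,w$)
$$\hat v(\mu) + c\,\mathbb{E}_{\mu}\big[u_R(a_0(\mu),\theta)\big] \;=\; \max_{a \in A}\ \mathbb{E}_{\mu}\big[u_S(a,\theta) + c\, u_R(a,\theta)\big].$$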
The function inside the concavification is the maximum of linear functions and hence convex in the belief, so full disclosure must be optimal by Kamenica and Gentzkow (2011)'s concavification theorem. This completes our heuristic proof outline. Details are fleshed out in PROOF OF THEOREM 2.
Theorem 2 has a few useful corollaries about properties of the constrained optimum:
Corollary 2.
The following are true about -constrained optimal solutions and the value function :
(1) (Lump-Sum Monotonicity) The lump sum for -constrained optima is increasing in , strictly if .
(2) (Concavity) is concave in and .
The proof is in PROOF OF COROLLARY 2. Part (1) further sharpens the intuition that lump-sum payments are used to "top off" Receiver utility once information can no longer feasibly deliver the remaining surplus. Part (2) is useful when , since it restricts the effective domain of beliefs over which we need to search to find an optimal policy. Part (3) implies Sender prefers not to spread out utility promises whenever possible; this is particularly useful in the context of the dynamic problem, which we analyze next.
We can interpret Theorem 2 in the context of the motivating example as follows. Suppose drivers' outside options from being on the platform suddenly increase, for example, because of minimum wage laws, changes in purchasing power parity, or easier regulation of taxi medallions. If this occurs, our model predicts first an increase in match efficiency for drivers – they get more information – followed by no further change in efficiency once the outside option is sufficiently high. While stylized (in particular, we don't consider the effect of wages on platform demand for drivers), we think this highlights an important novel channel through which standard minimum wage analysis differs when the employer partially compensates agents through information.
6 Dynamics
6.1 The Dynamic Model
Time is discrete and indexed by . In each period, Sender and Receiver play the static game described in Section 3, with the state drawn i.i.d. across periods.
At the beginning of each period, players observe the past history of play, consisting of past state realizations, signals, and actions . They then simultaneously choose strategies in the stage game. (This implies the state is perfectly revealed at the end of each period, as is Receiver's action. In the context of the motivating example, we interpret this as saying the driver sees the true value of the ride after making their acceptance/rejection decision.) Let be the set of all time- histories and be the set of all histories. Strategies are then functions and . We use subscripts in to distinguish between the choice of experiment and transfer function.
Each pair of strategies induces a probability measure over the set of histories. We say Receiver’s strategy is obedient if there are no profitable one-shot deviations at any on-path history.
A strategy tuple is optimal for Sender if it solves
We are interested in studying the qualitative properties of optimal tuples. It will be useful to cast the problem recursively.
The first constraint is the Bayes plausibility (martingale) constraint; the second is Receiver incentive compatibility, and the final one is the dynamic promise keeping constraint. If a tuple of functions solves the above problem, we will refer to the induced value function (starting from ) as the value of the -repeated persuasion problem. Note Sender's value from dynamic persuasion and transfers is exactly , since they start out in the first period at a trivial history without a utility promise constraint.
6.2 Equilibrium Analysis
When might it be that Sender benefits from the ability to intertwine Receiver incentives across time? Fix some static optimum and recall that repeating the static optimum gives Sender a normalized discounted repeated payoff of for every discount rate . However, suppose there were some alternative information structure which gave Sender a payoff close to the payoff at but which Receiver preferred to by a lot. Sender could then leverage commitment to in the future to extract more surplus from Receiver today at a smaller cost tomorrow, increasing Sender's payoff. In Proposition 3 below, we establish surprisingly general conditions under which this intuition can be formalized (in particular, under which such "nearby" information and transfer schemes exist).
Definition 7.
Say Sender benefits from dynamics at prior and discount rate if .
From here, define
to be Sender’s best possible and Receiver’s worst possible payoff at a prior for some information structure, respectively. If , we say Sender does not attain first best at prior . If , where is a static optimal policy, we say Receiver values persuasion. We can now characterize when Receiver benefits from dynamics; the proof of Proposition 3 is deferred until PROOF OF PROPOSITION 3
Proposition 3.
Suppose Sender does not attain first best and Receiver values persuasion at . Then there exists such that for all , Sender benefits from dynamics. Moreover,
Compare Proposition 3 to Example 2.2: in the candidate optimum there, even though Receiver values persuasion, Sender must still transfer some surplus in order to meet Receiver incentive compatibility constraints conditional on a signal realization. Note the mechanism in Example 2.2 is distinct from the asymptotic review policies constructed in PROOF OF PROPOSITION 3; because the example has only finitely many periods, we use transfers to "smooth out" surplus instead.
One naive intuition for the result behind Proposition 3 is that the combination of repetition with transfers and a patient Receiver implies the asymptotic information and transfer policy must attain payoffs on the ex-post efficient frontier (i.e. a similar intuition to Levin (2003)). However, this is not the core economic force driving our result: even though there are transfers, the limited liability constraint may still bind, and thus, if Receiver does not value persuasion at , it may be impossible to reach the efficient frontier. Below, we give an example highlighting this intuition, which also demonstrates why Proposition 3 requires the assumption that Receiver values persuasion.
Consider the following variation of the standard judge-jury example in Kamenica and Gentzkow (2011), where , , and is the prior probability . Suppose payoffs are as follows:
| S/R | | |
|---|---|---|
| | (0, 1) | (1, 0) |
| | (0, 0) | (1, 1) |
Suppose , so the optimal joint information and transfer policy is to induce beliefs and never pay Receiver, netting Sender a payoff of and Receiver a payoff of . However, the efficient allocation here is full information, which guarantees Sender and Receiver a joint payoff of ( for Sender and for Receiver).
Proposition 4.
In the game above, at , for all .
Why does Sender not benefit from dynamics here? Note first that if , then Sender never wants to use transfers because compensating Receiver is more expensive than the benefit Sender can get (since Sender’s payoff from increasing the probability that is chosen is ). Moreover, dynamic information can never be helpful because, conditional on the state , Receiver and Sender actions are zero-sum. Since the static optimum already always induces whenever , any further increase in the probability of being chosen would have to be when , and hence Sender can never relax Receiver’s incentive constraints today by increasing tomorrow’s utility promises without decreasing their own expected utility by the same amount, making a strict improvement impossible.
Unfortunately, finding tight necessary and sufficient conditions remains elusive. In particular, this is because the degree to which transfers can ameliorate or extract surplus from Receiver when they take an action that is better than their no-information action will depend crucially on the cardinal structure of payoffs when Receiver does not value persuasion at . Finding jointly necessary and sufficient conditions is a promising area for future research.
7 Discussion
In this paper, we analyzed a general model of information design where Sender can also commit to action and belief-contingent transfers. We first showed that transfers do not restore ex-post efficiency in the canonical finite persuasion model. Next, we introduced a geometric argument that characterized Sender’s value from having access to both tools. We then used this characterization to give conditions under which Sender benefits strictly from having access to both informational and monetary tools in terms of their optimal choices at extremal beliefs. From there, we showed that when Receiver had an exogenous (nonzero) outside option, Sender’s optimal action responded in a sequential way: first by providing more information, and then only after all informational tools had been exhausted would they augment their transfer function. Finally, we gave conditions under which Sender could benefit from intertemporal commitment, and interpreted our results in the context of a rideshare platform (our motivating example).
We think that there are several promising directions for future research that our model and results can speak to. Below, we sketch out several of the most promising directions.
Nonlinear Transfers.
Neither Theorem 1 nor Proposition 2 use the linear structure of transfers. Consequently, it may be natural to consider more general settings where Sender takes an action that affects both their own payoff and the payoff of Receiver in a nonlinear way. Understanding whether our extremal characterization generalizes, and if not, what an appropriate geometric characterization would be can help us better understand the core economic forces behind persuasion problems where the state can also engage in repression (Gitmez and Sonin (2023)), persuasion with costly information acquisition (Matysková and Montes (2023)), or persuasion when there is a hold-up problem by Sender (i.e. an information design interpretation of Dworczak and Muir (2025)).
Explicit Dynamics.
Proposition 3 does not explicitly characterize the qualitative structure of the optimal contract. However, given our characterization of the static constrained problem (Theorem 2) and the fact that Bayesian beliefs must form a martingale intertemporally, a natural conjecture would be that the optimal contract asymptotically converges to either (1) the repeated static optimum, or (2) full information and a transfer that services some utility promise constraint. Moreover, which of these two regimes behavior converges to may depend on whether early signal realizations give Receiver a high or low payoff, mirroring the history-dependence of Guo and Hörner (2020). Finally, this conjecture mirrors and would relate to known results in dynamic moral hazard where utility promises must converge (i.e. Thomas and Worrall (1990)), and where (with transfers) Sender may want to explicitly retire Receiver by giving up on leveraging dynamic incentives (i.e. Sannikov (2008)).
Platform Design: Thickness versus Efficiency.
Finally, towards analyzing the platform problem in the motivating example more concretely, we could endogenize consumers and suppose the platform (i.e. Sender) can charge consumers a price to elicit services. Consumers demand different ride attributes, and hence different pricing schemes lead to different distributions of ride quality, which in equilibrium would be the prior drivers hold about the state of the world (this "endogenous prior" setting is reminiscent of, and a potential microfoundation for, the setting in Dai and Koh (2024)). Sender now faces an additional tradeoff to the one analyzed in this paper (a similar tradeoff is studied by Gao (2024)): an efficiency benefit (changing the distribution of consumers by varying consumer prices to induce a more favorable prior belief to drivers) versus a thickness cost (decreasing the total mass of drivers in order to hit the desired prior). This would allow us to speak to some of the recent questions about the role of information and pricing and how they affect platforms more generally (see, for example, Bergemann et al. (2025) and Bergemann and Bonatti (2024)).
Appendix A: Omitted Proofs
PROOF OF LEMMA 1
Proof.
Fix an optimal , and suppose is induced at some belief inducing . First suppose . Since Receiver takes action , this implies
Taking differences, this tells us that
Since maximizes Receiver’s payoff without transfers, we have that for any (so that )
and so in particular under payments Receiver will not want to take any . Suppose now that . Then if at belief Sender paid for some , they could attain a strictly higher payoff than the induced pair under at , a contradiction to the optimality of the original tuple. This finishes the proof. ∎
PROOF OF PROPOSITION 1
Proof.
Fix any prior belief . We have the following chain of (in)equalities.
The first equality follows from the definition and maximization first over then ; the second from the fact that moving the maximum into the expectation must make the value weakly greater, the third from Lemma 1 since exactly implements the canonical transfers, and the final one from the standard concavification theorem of Kamenica and Gentzkow (2011). ∎
PROOF OF THEOREM 1
Proof.
That each is compact and convex follows immediately from Lemma A.1 of Gao and Luo (2025); that each is a polytope follows by noting
where is any measure (not necessarily a probability measure). This is a finite intersection of half-spaces, and thus each has finitely many extreme points by Theorem 19.1 of Rockafellar (1996). For any fixed , we can now write the transfer-augmented indirect value function on as
which is the maximum of linear functions over a finite index. This implies is convex over the interior of each . Moreover, is globally upper semi-continuous over all of since it is the finite upper envelope of continuous functions. Thus, we have that for any , with strict inequality only potentially possible on the boundaries of .
Now let be the set of extremal beliefs; this is finite. By definition, is a concave function such that for all , which itself is piecewise convex and globally upper semi-continuous (with jumps at most on points in ). But then because on the boundary of each and affine on the interior (by the definition of the concavification), it must be that for each belief . Since we also know for all (by Proposition 1), it must be that . This finishes the proof. ∎
PROOF OF COROLLARY 1
Proof.
Since Sender faces a moment persuasion problem, it is without loss of generality to parametrize Receiver's payoff by the same moment function (from Sender's point of view), in particular by taking equivalence classes of defined by . Second, by the argument in Proposition (3) of Gao and Luo (2025), every extremal belief is either an indifference belief (i.e. is multiple-valued) or a degenerate belief. Thus, since Sender faces a moment persuasion problem, it is sufficient to find one solution for each pair (a, a') to the equation
where we abuse notation to mean the transformed problem where we take equivalence classes of beliefs in . Because is a one-dimensional summary statistic, this equation admits a unique solution which can be computed in polynomial time (recalling and are both finite). For each pair , this is a simple linear equation with finitely many states and hence can be solved in polynomial time. Varying over all doubletons of shows there are at most possible pairs of beliefs , which capture a superset of all extremal beliefs. For each of these extremal beliefs, we need to find the (finite) maximum value , but this involves comparing finitely many linear equations and thus can be executed in polynomial time. Finally, this gives on a superset of , and so the concavification is the linear interpolation of the highest points. Concatenating each of the finitely many (polynomial-time) steps together implies the result. ∎
PROOF OF PROPOSITION 2
Proof.
Clearly, , so (with equality only if transfers are not useful). Throughout, we will use the fact that for any connected set , is connected, so in particular the convex hulls of extremal points are mapped to intervals in the moment persuasion problem. We can thus find extremal beliefs and such that for some and .
Suppose the first condition holds, and we have picked a generic prior such that . Because the induced action under transfers is distinct from the induced action without, we know that either or . Moreover, for every extremal belief where , . Combining these observations with the decomposition of into a convex combination of extremal beliefs (and linearly interpolating its values) then implies the argument. Note that if , a similar argument applies by looking only at the extremal point and the induced action at that point.
Now suppose Condition (2) holds. We will pick a generic where is on the interior of some ; in particular, this implies that we can write for . Part (1) already shows Sender benefits from transfers; we want to show they also benefit from persuasion, i.e. it is not optimal to induce the no-information experiment. We now have the following computation.
Suppose not, so that at , is induced and . Since is on the interior of some ,
is linear on a neighborhood . But because attains the optimal concavification value, one of two things must be true:
(1) There exists such that , or
(2) is linear over interval containing .
Case (1) occurs when the slope of the mapping above is not equal to the value , i.e. the slope of the value of the -cavification over the interval ; this is clearly impossible by definition of . Next, we show Case (2) is impossible too. Since and are induced at and and are distinct, it must be that either or . But then this implies that
the last equality uses the fact is linear on by Condition (2). This implies , a contradiction to our hypothesis. Thus, at , it cannot be that , so Sender benefits from persuasion and transfers.
Finally, suppose condition (3) holds. This implies that at the extremal points . Taking the decomposition of the prior , we then have that
The first inequality is definitional, the second by choice of and , the third from the hypotheses of Part (3), and the last one again by the definition of concavification. Thus, the -cavification and concavification coincide, so Sender does not benefit from transfers. ∎
PROOF OF LEMMA 2
Proof.
Suppose is -optimal. For every and , let be the difference in payoffs between the -optimal transfers and the canonical transfers. Each must be nonnegative by the definition of and the fact is chosen on path. Define , and consider the tuple . Clearly, the transfer scheme satisfies the constraints since the expected payment is the same, but also must induce the same actions on path as , and induces the same actions as . Thus, is -constrained optimal. ∎
PROOF OF THEOREM 2
Proof.
We split the proof into two distinct steps to supplement the discussion in the main exposition. Step (1) validates the program written down in the discussion, and Step (2) solves it to prove Theorem 2.
STEP 1: VALIDATING THE LAGRANGIAN APPROACH
For completeness, we introduce some notation from Doval and Skreta (2023). Fix a prior and a utility promise . Let
be the set of feasible beliefs at which there exists an information policy that satisfies the utility promise constraint supposing that there is a lump-sum payment of . Let be the set of experiments where the utility promise holds strictly at that belief, noting . Sender’s objective function, , is upper semi-continuous by assumption and Receiver’s objective function at (which defines our constraint), is the maximum of linear functions and hence continuous. Moreover, both are finite valued. Hence we satisfy all of the necessary assumptions for Theorem (3.1) in Doval and Skreta (2023): restated in our notation, this says exactly that
Theorem (Doval and Skreta, 2023).
For any ,
where the concavification is taken in both the prior and utility promise.
This program yields a value of whenever . The outer supremum over in our program follows from the fact is endogenously chosen by Sender via transfers. Since for any , there exists some at which , for all .
We now want to apply Theorem (3.3) in Doval and Skreta (2023), noting that if then (a nontriviality assumption required by the theorem).
Theorem (Doval and Skreta, 2023).
For every and every ,
What if ? That is, there is no information policy which can guarantee a payoff of at least under the canonical transfers. In that case, the first part of our result implies that the value of the constraint is at . Setting it equal to this value and taking the supremum across all then implies the desired equality.
Suppose now we have fixed some ; substituting in the definition of then implies that the function inside the concavification is given by
Maximizing over and infimizing over then implies the desired program.
STEP 2: SOLVING THE LAGRANGIAN
To solve the program, note that the value function coincides with the following program at the prior and utility promise tuple :
where we pull the constant term out of the concavification. This objective is differentiable in , and hence if the optimal is ever nonzero, then the first-order necessary condition must hold: that is, . Plugging this back into the above program implies the value is equivalent to solving (less the constant term )
This is the concavification of a (strictly) convex function in for any and hence is attained by full-information. Otherwise, if , then , and we apply the Doval and Skreta (2023) theorems to the problem without supremizing over . ∎
PROOF OF COROLLARY 2
Proof.
The first part. Fix so that and fix some , and suppose . By Theorem 2, this implies is full-information;
where the first comes from the fact , the second from Blackwell’s theorem (since is full information and hence Blackwell-maximal among all posterior distributions), the third from the fact is -constrained maximum, and the final one by the fact . This then implies that . This implies both parts of the argument once we recall that always.
The second part. Clearly, for fixed , the value is concave in , since the second representation in the proof of Theorem 2 implies
The outer supremum is independent of the concavification (and moreover is independent of beliefs; in particular, will be concave in for any ), and the inner infimum is the infimum of functions each of which is concave in (and hence concave). This implies is concave in .
For , there are two cases. First, if we are on the region where , then the value of persuasion and transfers is exactly linear in with slope (since increases in the utility promise must be matched one-for-one with increases in payments). Second, if we are on the interior of the region of promises where , then
which is an infimum of functions that are linear in  and hence must be concave.
Finally, global concavity. Let  be the unique point at which  but  for all . For a small enough compact neighborhood around ,  must be chosen from a compact set, so by Berge's theorem of the maximum the infimizers for  are upper hemi-continuous in the utility promise. Since  for all , this implies , and hence the subderivative of  as  must converge to  (noting that a subderivative exists since  is concave for all ). This implies that a smooth-pasting property holds, so  is globally concave in  for fixed prior , as desired. ∎
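As a reminder of why smooth pasting suffices here, the gluing step uses the following elementary fact about functions of one variable (stated in our own notation): if $g$ is concave on $[a,b]$ and on $[b,c]$, is continuous at $b$, and its one-sided derivatives satisfy $g'_{-}(b) \geq g'_{+}(b)$, then $g$ is concave on all of $[a,c]$. Smooth pasting delivers exactly the required comparison of one-sided derivatives at the kink between the two regions of promises identified above.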
PROOF OF PROPOSITION 3
Proof.
Fix some discount rate . Normalize payoffs so that . Consider the following strategy for Sender, given some fixed .
(1) At periods which are divisible by , reveal the state, pay nothing, and recommend .
(2) In all other periods where  is nonzero, if Receiver listened to Sender's recommendation at the most recent time where , then revert to the static optimal information policy , and recommend  to Receiver.
(3) Otherwise, if Receiver deviated at , give no information and recommend .
These strategies split histories into blocks (i.e., the clock resets every  periods) and extract surplus from Receiver exactly at the beginning of each block. If we are in case (2) or (3), the recommended action is clearly incentive compatible for Receiver, since it is myopically optimal and has no effect on Receiver's continuation value.
Finally, suppose we are in case (1). Define to be Receiver’s deviation gain at state , and let . Let
be the amount by which Receiver benefits from persuasion. Incentive compatibility is then satisfied so long as . Pick  sufficiently large so that , noting that such a choice must exist since . Continuity in  then implies that there exists some  at which this inequality holds (strictly) as well. Moreover, since  is increasing in  for any fixed , the inequality must also hold for all . Thus Receiver has no profitable deviation in the periods where Sender adopts full information, and so the recommendation policy is incentive compatible for Receiver at all histories. Finally, since Sender does not attain first best, in periods divisible by  they outperform their static persuasion payoff.
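To make the choice of the block length and the discount threshold concrete, here is a small numerical sketch. It is purely illustrative and uses stand-in symbols that do not appear in the paper: G is Receiver's one-shot deviation gain at a revelation period, B > 0 is Receiver's per-period benefit from the static optimal policy relative to no information, and N is the block length. Under the assumption that the incentive constraint takes the form "deviation gain at most the discounted persuasion benefit forgone over the rest of the block," the sketch first picks N large enough that (N − 1)·B exceeds G, and then bisects for the smallest discount factor at which the constraint holds.

# Illustrative sketch (not from the paper): find a block length N and a
# discount-factor threshold at which Receiver's incentive constraint in
# case (1) holds, assuming a one-shot deviation gain G and a per-period
# persuasion benefit B > 0 that is lost during the punishment phase.

def punishment_value(delta, B, N):
    """Discounted benefit forgone over the remaining N - 1 periods of a block."""
    return sum(B * delta ** s for s in range(1, N))

def ic_threshold(G, B, tol=1e-10):
    """Smallest block length and (approximate) discount threshold for IC."""
    N = 2
    while (N - 1) * B <= G:   # such an N exists because B > 0
        N += 1
    lo, hi = 0.0, 1.0
    while hi - lo > tol:      # bisect: punishment_value is increasing in delta
        mid = (lo + hi) / 2
        if punishment_value(mid, B, N) >= G:
            hi = mid
        else:
            lo = mid
    return N, hi

N, delta_bar = ic_threshold(G=1.0, B=0.25)
print(N, round(delta_bar, 4))  # IC holds for all delta >= delta_bar at block length N

Because the discounted punishment value is increasing in the discount factor, incentive compatibility then holds for every discount factor above the returned threshold, mirroring the monotonicity step in the proof.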
To see that the limiting inequality holds, recall that
\[
\lim_{\delta \to 1^{-}} (1-\delta) \sum_{t=0}^{\infty} \delta^{t} a_{t} \;=\; \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} a_{t}
\]
for any bounded sequence of numbers $(a_{t})_{t \geq 0}$ for which the long-run average on the right exists. In particular, this implies that as $\delta \to 1$, the value of the above strategy converges to the average payoff attained across all periods. Since Sender outperforms their static persuasion payoff a positive fraction of the time by a bounded amount, their total average payoff must also increase, and hence we obtain the asymptotic result. ∎
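The limiting-average step can also be sanity-checked numerically. The snippet below is ours and uses an arbitrary bounded, periodic payoff stream (a payoff of 1 each period with a bonus every N-th period); it confirms that the normalized discounted sum approaches the long-run average payoff as the discount factor tends to one.

# Sanity check (ours): for a bounded periodic payoff stream, the normalized
# discounted sum (1 - delta) * sum_t delta**t * a_t approaches the long-run
# average payoff as delta -> 1.

def discounted_average(stream, delta, horizon=200_000):
    return (1 - delta) * sum(delta ** t * stream(t) for t in range(horizon))

N = 6
stream = lambda t: 2.0 if t % N == 0 else 1.0   # "bonus" payoff every N-th period
long_run_average = (2.0 + (N - 1) * 1.0) / N     # = 7/6

for delta in (0.9, 0.99, 0.999):
    print(delta, round(discounted_average(stream, delta), 4))
# Printed values approach 7/6 ≈ 1.1667 as delta -> 1.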
PROOF OF PROPOSITION 4
Proof.
Let  be the joint distribution over states and actions induced by the static optimum at . We start with three observations.
(1)  and .
(2) Receiver does not benefit from persuasion: that is, at the static optimum, their expected payoff is , which is the same as their no-information payoff.
(3) Conditional on , Sender and Receiver play a zero-sum game.
Now suppose not. Then there exists some  where Sender benefits from dynamics. Fix any history at which the static optimum is not played and at which Sender's continuation payoff starting at  is higher than their repeated static payoff. Since Receiver does not benefit from persuasion, Sender can only increase their payoff at  in a zero-sum way: either by promising a higher probability of choosing  at  in the future, or by paying the agent. But since Sender's payoff and Receiver's payoff over the static optimum allocation are zero-sum, the utility promise to Receiver necessary to induce this allocation is equal to the benefit Sender attains today, meaning this cannot be profitable over the static optimum. Hence the repeated static allocation must be optimal. ∎
References
- Aumann and Maschler (1995) Aumann, Robert J., and Michael B. Maschler. 1995. Repeated Games with Incomplete Information. Cambridge, MA: MIT Press.
- Babichenko et al. (2021) Babichenko, Yakov, Inbal Talgam-Cohen, and Konstantin Zabarnyi. 2021. “Bayesian Persuasion under Ex Ante and Ex Post Constraints.” arXiv preprint arXiv:2012.03272.
- Barros (2025) Barros, Lucas. 2025. “Information Acquisition in Cheap Talk.” arXiv preprint.
- Bergemann and Bonatti (2024) Bergemann, Dirk, and Alessandro Bonatti. 2024. “Data, Competition, and Digital Platforms.” American Economic Review 114 (8): 2553–2595. 10.1257/aer.20230478.
- Bergemann et al. (2025) Bergemann, Dirk, Alessandro Bonatti, and Nicholas Wu. 2025. “How do Digital Advertising Auctions Impact Product Prices?” The Review of Economic Studies, Extended Abstract (EC ’23).
- Bergemann et al. (2015) Bergemann, Dirk, Benjamin Brooks, and Stephen Morris. 2015. “The Limits of Price Discrimination.” American Economic Review 105 (3): 921–957. 10.1257/aer.20130436.
- Bergemann et al. (2022) Bergemann, Dirk, Tibor Heumann, Stephen Morris, Constantine Sorokin, and Eyal Winter. 2022. “Optimal Information Disclosure in Classic Auctions.” American Economic Review: Insights 4 (3): 371–388. 10.1257/aeri.20210307.
- Bergemann and Morris (2016) Bergemann, Dirk, and Stephen Morris. 2016. “Bayes Correlated Equilibrium and the Comparison of Information Structures in Games.” Theoretical Economics. 10.3982/TE1850.
- Bergemann and Morris (2019) Bergemann, Dirk, and Stephen Morris. 2019. “Information Design: A Unified Perspective.” Journal of Economic Literature 57 (1): 44–95. 10.1257/jel.20181489.
- Bergemann and Pesendorfer (2007) Bergemann, Dirk, and Martin Pesendorfer. 2007. “Information Structures in Optimal Auctions.” Journal of Economic Theory 137 (1): 580–609. 10.1016/j.jet.2007.02.001.
- Dai and Koh (2024) Dai, Yifan, and Andrew Koh. 2024. “Flexible Demand Manipulation.” SSRN Working Paper 4724126, Massachusetts Institute of Technology (MIT). 10.2139/ssrn.4724126.
- Doval and Skreta (2023) Doval, Laura, and Vasiliki Skreta. 2023. “Constrained Information Design.” Mathematics of Operations Research 49 (1). 10.1287/moor.2022.1346.
- Doval and Smolin (2024) Doval, Laura, and Alex Smolin. 2024. “Persuasion and Welfare.” Journal of Political Economy 132 (7): 1234–1267. 10.1086/729067.
- Dworczak and Muir (2025) Dworczak, Piotr, and Ellen Muir. 2025. “A Mechanism-Design Approach to Property Rights.”
- Ely (2017) Ely, Jeffrey C. 2017. “Beeps.” American Economic Review 107 (1): 31–53. 10.1257/aer.20150218.
- Ely et al. (2022) Ely, Jeffrey C., George Georgiadis, Sina Khorasani, and Luis Rayo. 2022. “Optimal Feedback in Contests.” Review of Economic Studies 1–25. 10.1093/restud/rdac074 (advance access, 28 October 2022).
- Eso and Szentes (2007) Eso, Peter, and Balazs Szentes. 2007. “Optimal Information Disclosure in Auctions and the Handicap Auction.” Review of Economic Studies 74 (3): 705–731. 10.1111/j.1467-937X.2007.00437.x.
- Gao and Luo (2025) Gao, Eric, and Daniel Luo. 2025. “Prior-Free Predictions for Persuasion.” arXiv preprint arXiv:2312.02465.
- Gao (2024) Gao, Ying. 2024. “Inference from Selectively Disclosed Data.”
- Gitmez and Sonin (2023) Gitmez, A. Arda, and Konstantin Sonin. 2023. “The Dictator’s Dilemma: A Theory of Propaganda and Repression.” University of Chicago, Becker Friedman Institute for Economics Working Paper. 10.2139/ssrn.4451613.
- Guo and Hörner (2020) Guo, Yingni, and Johannes Hörner. 2020. “Dynamic Allocation without Money.” TSE Working Papers 20-1133, Toulouse School of Economics (TSE).
- Holmström (2017) Holmström, Bengt. 2017. “Pay for Performance and Beyond.” American Economic Review 107 (7): 1753–1777. 10.1257/aer.107.7.1753.
- Kamenica (2019) Kamenica, Emir. 2019. “Bayesian Persuasion and Information Design.” Annual Review of Economics 11 (1): 249–272. 10.1146/annurev-economics-080218-025739.
- Kamenica and Gentzkow (2011) Kamenica, Emir, and Matthew Gentzkow. 2011. “Bayesian Persuasion.” American Economic Review 101 (6): 2590–2615. 10.1257/aer.101.6.2590.
- Kleiner et al. (2024) Kleiner, Andreas, Benny Moldovanu, Philipp Strack, and Mark Whitmeyer. 2024. “The Extreme Points of Fusions.” arXiv preprint arXiv:2409.10779.
- Koh and Sanguanmoo (2022) Koh, Andrew, and Sivakorn Sanguanmoo. 2022. “Attention Capture.”
- Koh et al. (2024) Koh, Andrew, Sivakorn Sanguanmoo, and Weijie Zhong. 2024. “Persuasion and Optimal Stopping.”
- Kosenko (2023) Kosenko, Andrew. 2023. “Constrained Persuasion with Private Information.” The BE Journal of Theoretical Economics 23 (1): 345–370. 10.1515/bejte-2023-0030.
- Levin (2003) Levin, Jonathan. 2003. “Relational Incentive Contracts.” American Economic Review 93 (3): 835–857. 10.1257/000282803322157035.
- Li (2017) Li, Cheng. 2017. “A Model of Bayesian Persuasion with Transfers.” Economics Letters 161: 93–95. 10.1016/j.econlet.2017.09.036.
- Lipnowski and Ravid (2020) Lipnowski, Elliot, and Doron Ravid. 2020. “Cheap Talk with Transparent Motives.” Econometrica 88 (4): 1631–1660.
- Lorecchio and Monte (2023) Lorecchio, Caio, and Daniel Monte. 2023. “Dynamic Information Design under Constrained Communication Rules.” American Economic Journal: Microeconomics 15 (1): 359–398. 10.1257/mic.20200356.
- Mathevet et al. (2020) Mathevet, Laurent, Jacopo Perego, and Ina Taneva. 2020. “On Information Design in Games.” Journal of Political Economy 128 (4): 1346–1382. 10.1086/705332.
- Matysková and Montes (2023) Matysková, Ludmila, and Alfonso Montes. 2023. “Bayesian Persuasion with Costly Information Acquisition.” Journal of Economic Theory 211: 105678. 10.1016/j.jet.2023.105678.
- Ravid et al. (2022) Ravid, Doron, Anne-Katrin Roesler, and Balázs Szentes. 2022. “Learning before Trading: On the Inefficiency of Ignoring Free Information.” Journal of Political Economy 130 (2): 431–464. 10.1086/717350.
- Rayo and Segal (2010) Rayo, Luis, and Ilya Segal. 2010. “Optimal Information Disclosure.” Journal of Political Economy 118 (5): 949–987. 10.1086/657922.
- Renault et al. (2017) Renault, Jérôme, Eilon Solan, and Nicolas Vieille. 2017. “Optimal Dynamic Information Provision.” Games and Economic Behavior 104: 329–349.
- Rockafellar (1996) Rockafellar, Ralph. 1996. Convex Analysis. Princeton University Press.
- Sannikov (2008) Sannikov, Yuliy. 2008. “A Continuous-Time Version of the Principal-Agent Problem.” Review of Economic Studies 75 (3): 957–984. 10.1111/j.1467-937X.2008.00492.x.
- Smolin and Yamashita (2023) Smolin, Alex, and Takuro Yamashita. 2023. “Information Design in Smooth Games.”
- Taneva (2019) Taneva, Ina. 2019. “Information Design.” American Economic Journal: Microeconomics 11 (4): 151–185. 10.1257/mic.20170152.
- Terstiege and Wasser (2022) Terstiege, Stefan, and Cédric Wasser. 2022. “Competitive Information Disclosure to an Auctioneer.” American Economic Journal: Microeconomics 14 (3): 622–664. 10.1257/mic.20200027.
- Thomas and Worrall (1990) Thomas, Jonathan, and Tim Worrall. 1990. “Income Fluctuation and Asymmetric Information: An Example of a Repeated Principal-Agent Problem.” Journal of Economic Theory 51 (2): 367–390. 10.1016/0022-0531(90)90023-D.
- Treust and Tomala (2019) Treust, Mikaël Le, and Tristan Tomala. 2019. “Persuasion with Limited Communication Capacity.” Journal of Economic Theory 184: 104940. 10.1016/j.jet.2019.104940.