Deep Learning with Parametric Lenses
Abstract.
We propose a categorical semantics for machine learning algorithms in terms of lenses, parametric maps, and reverse derivative categories. This foundation provides a powerful explanatory and unifying framework: it encompasses a variety of gradient descent algorithms such as ADAM, AdaGrad, and Nesterov momentum, as well as a variety of loss functions such as MSE and Softmax cross-entropy, and different architectures, shedding new light on their similarities and differences. Furthermore, our approach to learning has examples generalising beyond the familiar continuous domains (modelled in categories of smooth maps) and can be realised in the discrete setting of Boolean and polynomial circuits. We demonstrate the practical significance of our framework with an implementation in Python.
Key words and phrases:
Neural network, Deep Learning, String diagram, Symmetric Monoidal Category, Cartesian Differential Category
1. Introduction
The last decade has witnessed a surge of interest in machine learning, fuelled by the numerous successes and applications that these methodologies have found in many fields of science and technology. As machine learning techniques become increasingly pervasive, algorithms and models become more sophisticated, posing a significant challenge both to the software developers and to the users who need to interface with, execute, and maintain these systems. In spite of this rapidly evolving picture, the formal analysis of many learning algorithms mostly takes place at a heuristic level [48], or using definitions that fail to provide a general and scalable framework for describing machine learning. Indeed, it is commonly acknowledged across academia, industry, policy makers and funding agencies that there is a pressing need for a unifying perspective, which can make this growing body of work more systematic, rigorous, transparent and accessible both for users and developers [41, 52].
Consider, for example, one of the most common machine learning scenarios: supervised learning with a neural network. This technique trains the model towards a certain task, e.g. the recognition of patterns in a data set (cf. Figure 1). There are several different ways of implementing this scenario. Typically, at their core, there is a gradient update algorithm (often called the "optimiser"), depending on a given loss function, which updates the parameters of the network in steps, based on some learning rate controlling the "scaling" of the update. All of these components can vary independently in a supervised learning algorithm, and a number of choices are available for loss maps (quadratic error, Softmax cross entropy, dot product, etc.) and optimisers (Adagrad [23], Momentum [44], Adam [36], etc.).

This scenario highlights several questions: is there a uniform mathematical language capturing the different components of the learning process? Can we develop a unifying picture of the various optimisation techniques, allowing for their comparative analysis? Moreover, it should be noted that supervised learning is not limited to neural networks. For example, supervised learning is surprisingly applicable to the discrete setting of Boolean circuits [59], where continuous functions are replaced by Boolean-valued functions. Can we identify an abstract perspective encompassing both the real-valued and the Boolean case? In a nutshell, this paper seeks to answer the question:
what are the fundamental mathematical structures
underpinning gradient-based learning?
Our approach to this question stems from the identification of three fundamental aspects of the gradient-descent learning process:
(i) computation is parametric: in the simplest case we are given a function $f : P \times X \to Y$, and learning consists of finding a parameter $p : P$ such that $f(p, -)$ is the best function according to some criteria. Specifically, the weights on the internal nodes of a neural network are a parameter which the learning is seeking to optimise. Parameters also arise elsewhere, e.g. in the loss function (see later).
(ii) information flows bidirectionally: in the forward direction, the computation turns inputs via a sequence of layers into predicted outputs, and then into a loss value; in the reverse direction, backpropagation is used to propagate the changes backwards through the layers, and then turn them into parameter updates.
(iii) the basis of parameter update via gradient descent is differentiation: e.g. in the simple case we differentiate the function mapping a parameter to its associated loss, in order to reduce that loss.
We model bidirectionality via lenses [13, 6, 33] and, based upon the above three insights, we propose the notion of parametric lens as the fundamental semantic structure of learning. In a nutshell, a parametric lens is a process with three kinds of interfaces: inputs, outputs, and parameters. On each interface, information flows both ways, i.e. computations are bidirectional. These data are best explained with our graphical representation of parametric lenses, with inputs $A$, $A'$, outputs $B$, $B'$, parameters $P$, $P'$, and arrows indicating information flow (below left). The graphical notation also makes evident that parametric lenses are open systems, which may be composed along their interfaces (below center and right).
(1) [string diagrams of parametric lenses: a single morphism, a composite of morphisms, and a reparameterisation]
This pictorial formalism is not just an intuitive sketch: as we will show, it can be understood as a completely formal (graphical) syntax using the formalism of string diagrams [43], in a way similar to how other computational phenomena have been recently analysed e.g. in quantum theory [16], control theory [4, 8], and digital circuit theory [29].
It is intuitively clear how parametric lenses express aspects (i) and (ii) above, whereas (iii) will be achieved by studying them in a space of 'differentiable objects' (in a sense that will be made precise). The main technical contribution of our paper is showing how the various ingredients involved in learning (the model, the optimiser, the error map and the learning rate) can be uniformly understood as being built from parametric lenses.
We will use category theory as the formal language to develop our notion of parametric lenses, and make Figure 2 mathematically precise. The categorical perspective brings several advantages, which are well-known, established principles in programming language semantics [58, 1, 47]. Three of them are particularly important to our contribution, as they constitute distinctive advantages of our semantic foundations:
- Abstraction: Our approach studies which categorical structures are sufficient to perform gradient-based learning. This analysis abstracts away from the standard case of neural networks in several different ways: as we will see, it encompasses other models (namely Boolean circuits), different kinds of optimisers (including Adagrad, Adam, Nesterov momentum), and error maps (including quadratic and softmax cross entropy loss). These can all be understood as parametric lenses, and different forms of learning result from their interaction.
- Uniformity: As seen in Figure 1, learning involves ingredients that are seemingly quite different: a model, an optimiser, a loss map, etc. We will show how all these notions may be seen as instances of the categorical definition of a parametric lens, thus yielding a remarkably uniform description of the learning process, and supporting our claim that parametric lenses are a fundamental semantic structure of learning.
- Compositionality: The use of categorical structures to describe computation naturally enables compositional reasoning, whereby complex systems are analysed in terms of smaller, and hence easier to understand, components. Compositionality is a fundamental tenet of programming language semantics; in the last few years, it has found application in the study of diverse kinds of computational models, across different fields — see e.g. [53, 28, 16, 8]. As made evident by Figure 2, our approach models a neural network as a parametric lens, resulting from the composition of simpler parametric lenses capturing the different ingredients involved in the learning process. Moreover, as all the simpler parametric lenses are themselves composable, one may engineer a different learning process by simply plugging a new lens on the left or right of existing ones. This means that one can glue together smaller and relatively simple networks to create larger and more sophisticated neural networks.
We now give a synopsis of our contributions:
• In Section 2, we introduce the tools necessary to define our notion of parametric lens. First, in Section 2.1, we introduce the $\mathbf{Para}$ construction, a functor turning a category $\mathcal{C}$ into one of 'parametric $\mathcal{C}$-maps'. Second, we recall lenses (Section 2.2). In a nutshell, a lens is a categorical morphism equipped with operations to view and update values in a certain data structure. Lenses play a prominent role in functional programming [56], as well as in the foundations of database theory [35] and, more recently, game theory [28]. Considering lenses in $\mathcal{C}$ simply amounts to the application of a functorial construction $\mathbf{Lens}$, yielding $\mathbf{Lens}(\mathcal{C})$. Third, we recall the notion of a cartesian reverse differential category (CRDC): a categorical structure axiomatising the notion of differentiation [15] (Section 2.4). We wrap up in Section 2.3, by combining these ingredients into the notion of parametric lens, formally defined as a morphism in $\mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$ for a CRDC $\mathcal{C}$. In terms of our desiderata (i)-(iii) above, note that $\mathbf{Para}$ accounts for (i), $\mathbf{Lens}$ accounts for (ii), and the CRDC structure accounts for (iii).
• As seen in Figure 1, in the learning process there are many components at work: the model, the optimiser, the loss map, the learning rate, etc. In Section 3, we show how the notion of parametric lens provides a uniform characterisation for such components. Moreover, for each of them, we show how different variations appearing in the literature become instances of our abstract characterisation. The plan is as follows:
– In Section 3.1, we show how the combinatorial model which is the subject of the training can be seen as a parametric lens. The conditions we provide are met by the 'standard' case of neural networks, but also enable the study of learning for other classes of models. In particular, another instance is Boolean circuits: learning of these structures is relevant to binarisation [18], and it has been explored recently using a categorical approach [59], which turns out to be a particular case of our framework. We continue by describing the internals of a model, translating several examples of models in deep learning to their categorical form. This includes linear layers, biases, activations, and convolutional layers, but also general techniques such as weight tying and batching.
– In Section 3.2, we show how loss maps — including quadratic error, Boolean error, Softmax cross entropy, and dot product — can likewise be seen as parametric lenses.
– In Section 3.3, we model the learning rate as a parametric lens. This analysis also allows us to contrast how the learning rate is handled in the 'real-valued' case of neural networks with respect to the 'Boolean-valued' case of Boolean circuits.
– In Section 3.4, we show how optimisers can be modelled as 'reparameterisations' of models as parametric lenses. As case studies, in addition to basic gradient ascent and descent, we consider the stateful variants: Momentum [44], Nesterov Momentum [57], Adagrad [23], and Adam (Adaptive Moment Estimation) [36], as well as optimiser composition (Subsection 3.4.1). Also, on Boolean circuits, we show how the reverse derivative ascent of [59] can be regarded in such a way.
• In Section 4, we study how the composition of the lenses defined in Section 3 yields a description of different kinds of learning processes.
– Section 4.1 is dedicated to modelling supervised learning of parameters, in the way described in Figure 1. This amounts essentially to the study of the composite of lenses expressed in Figure 2, for different choices of the various components. In particular we look at (i) quadratic loss with basic gradient descent, (ii) softmax cross entropy loss with basic gradient descent, (iii) quadratic loss with Nesterov momentum, and (iv) learning in Boolean circuits with XOR loss and basic gradient ascent.
– In Section 4.2 we describe how a system traditionally considered unsupervised can be recast in a supervised form: Generative Adversarial Networks [30, 3]. We define this model abstractly as a parametric lens, and describe how a particular instantiation thereof — the Wasserstein GAN [3] — arises as a supervised learning system with the dot product loss and the gradient descent-ascent optimiser.
– In order to showcase the flexibility of our approach, in Section 4.3 we depart from our 'core' case study of parameter learning and turn attention to supervised learning of inputs, also called deep dreaming — the idea behind this technique is that, instead of the network parameters, one updates the inputs, in order to elicit a particular interpretation [40, 38, 22, 51]. Deep dreaming can be easily expressed within our approach, via a different rearrangement of the parametric lenses involved in the learning process, see (11) below. The abstract viewpoint of categorical semantics provides a mathematically precise and visually captivating description of the differences between the usual parameter learning process and deep dreaming.
• In Section 5 we describe a proof-of-concept Python implementation, available at [19], based on the theory developed in this paper. This code is intended to show more concretely the payoff of our approach. Model architectures, as well as the various components participating in the learning process, are now expressed in a uniform, principled mathematical language, in terms of lenses. As a result, computing network gradients is greatly simplified, as it amounts to lens composition. Moreover, the modularity of this approach allows one to more easily tune the various parameters of training. We showcase our library via a number of experiments, and validate its correctness by achieving accuracy on par with an equivalent model in Keras, a mainstream deep learning framework [12]. In particular, we create a working non-trivial neural network model for the MNIST image-classification problem [37].
This paper extends a previous conference version [20]. Section 3.1 has been extended with examples of different architectures and techniques in deep learning, while Section 4.2, which considers unsupervised learning, and Remark 2, followed by a discussion on the axioms of CRDCs needed in our framework, are new. We have also included missing proofs and complementary background material (see in particular Appendices A-B).
2. Categorical Toolkit
In this section we describe the categorical components of our framework, each corresponding to an aspect of gradient-based learning: (i) the $\mathbf{Para}$ construction (Section 2.1), which builds a category of parametric maps, (ii) the $\mathbf{Lens}$ construction, which builds a category of "bidirectional" maps (Section 2.2), and (iii) the combination of these two constructions into the notion of "parametric lenses" (Section 2.3). Finally, (iv) we recall Cartesian reverse differential categories — categories equipped with an abstract gradient operator (Section 2.4).
Notation
We shall use $f ; g$ for sequential composition of morphisms $f : A \to B$ and $g : B \to C$ in a category, $1_A$ for the identity morphism on $A$, and $I$ for the unit object of a symmetric monoidal category.
2.1. Parametric Maps
In supervised learning one is typically interested in approximating a function $f : \mathbb{R}^n \to \mathbb{R}^m$ for some $n$ and $m$. To do this, one begins by building a neural network, which is a smooth map $g : \mathbb{R}^w \times \mathbb{R}^n \to \mathbb{R}^m$, where $\mathbb{R}^w$ is the space of possible weights of that neural network. Then one looks for a value of $q \in \mathbb{R}^w$ such that the function $g(q, -)$ closely approximates $f$. We formalise these parametric maps categorically via the $\mathbf{Para}$ construction [26, 27, 10, 34].
[Parametric category] Let $(\mathcal{C}, \otimes, I)$ be a strict symmetric monoidal category. (One can also define $\mathbf{Para}(\mathcal{C})$ in the case when $\mathcal{C}$ is non-strict; however, the result would be not a category but a bicategory.) We define a category $\mathbf{Para}(\mathcal{C})$ with
• objects those of $\mathcal{C}$;
• a map from $A$ to $B$ is a pair $(P, f)$, with $P$ an object of $\mathcal{C}$ and $f : P \otimes A \to B$;
• the identity on $A$ is the pair $(I, 1_A)$ (since $\mathcal{C}$ is strict monoidal, $I \otimes A = A$);
• the composite of maps $(P, f) : A \to B$ and $(Q, g) : B \to C$ is the pair $(Q \otimes P, (1_Q \otimes f) ; g)$.
Take the category $\mathbf{Smooth}$ whose objects are natural numbers and whose morphisms $n \to m$ are smooth maps from $\mathbb{R}^n$ to $\mathbb{R}^m$. This is a strict symmetric monoidal category with monoidal product given by addition of natural numbers. As described above, the category $\mathbf{Para}(\mathbf{Smooth})$ can be thought of as a category of neural networks: a map in this category from $n$ to $m$ consists of a choice of parameter space $w$ and a smooth map $g : \mathbb{R}^w \times \mathbb{R}^n \to \mathbb{R}^m$, with $\mathbb{R}^w$ representing the space of possible weights of the neural network.
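To make the construction concrete, here is a minimal Python sketch of composition in $\mathbf{Para}(\mathbf{Smooth})$, representing a parametric map simply as a function taking a parameter and an input; the names and the toy layers are illustrative, not part of the paper's accompanying library.

```python
import numpy as np

# A parametric map A -> B is a pair (P, f) with f : P x A -> B; here we
# represent just the function part, taking (parameter, input) to output.

def para_compose(f, g):
    """Sequential composition in Para: the composite's parameter is a pair
    (q, p), and g runs on the output of f."""
    def h(qp, a):
        q, p = qp
        return g(q, f(p, a))
    return h

# Two toy one-dimensional 'layers':
f = lambda p, x: p * x            # parameter p scales the input
g = lambda q, y: np.tanh(q + y)   # parameter q shifts before a nonlinearity

h = para_compose(f, g)
print(h((0.5, 2.0), 3.0))         # g(0.5, f(2.0, 3.0)) = tanh(6.5)
```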
As we will see in the next sections, the interplay of the various components at work in the learning process becomes much clearer once the morphisms of $\mathbf{Para}(\mathcal{C})$ are represented using the pictorial formalism of string diagrams, which we now recall. In fact, we will mildly massage the traditional notation for string diagrams (below left), by representing a morphism $(P, f) : A \to B$ in $\mathbf{Para}(\mathcal{C})$ as below right.
[string diagrams: $f$ drawn as a box $P \otimes A \to B$ (left), and as a parametric morphism $A \to B$ with a vertical parameter wire $P$ (right)]
This is to emphasise the special role played by $P$, reflecting the fact that in machine learning data and parameters have different semantics. String diagrammatic notation also allows us to neatly represent the composition of maps $(P, f) : A \to B$ and $(Q, g) : B \to C$ (below left), and the "reparameterisation" of $(P, f) : A \to B$ by a map $\alpha : Q \to P$ (below right), yielding a new map $(Q, (\alpha \otimes 1_A) ; f) : A \to B$.
(2) [string diagrams for composition and reparameterisation in $\mathbf{Para}(\mathcal{C})$]
Intuitively, reparameterisation changes the parameter space of $(P, f) : A \to B$ to some other object $Q$, via a map $\alpha : Q \to P$. We shall see later that gradient descent and its many variants can naturally be viewed as reparameterisations.
Note that the coherence rules for combining the two operations in (2) work as expected, as these diagrams can ultimately be 'compiled' down to string diagrams for monoidal categories.
2.2. Lenses
In machine learning (or even learning in general) it is fundamental that information flows both forwards and backwards: the ‘forward’ flow corresponds to a model’s predictions, and the ‘backwards’ flow to corrections to the model. The category of lenses is the ideal setting to capture this type of structure, as it is a category consisting of maps with both a “forward” and a “backward” part.
For any Cartesian category $\mathcal{C}$, the category $\mathbf{Lens}(\mathcal{C})$ of (bimorphic) lenses in $\mathcal{C}$ is the category with the following data:
• objects are pairs $(A, A')$ of objects in $\mathcal{C}$, written as $\binom{A}{A'}$;
• a map from $\binom{A}{A'}$ to $\binom{B}{B'}$ consists of a pair $(f, f^*)$ (also written as $\binom{f}{f^*}$), where $f : A \to B$ (called the get or forward part of the lens) and $f^* : A \times B' \to A'$ (called the put or backwards part of the lens);
• the identity on $\binom{A}{A'}$ is the pair $(1_A, \pi_1)$;
• the composite of $(f, f^*) : \binom{A}{A'} \to \binom{B}{B'}$ and $(g, g^*) : \binom{B}{B'} \to \binom{C}{C'}$ is given by get $f ; g$ and put $\langle \pi_0, \langle \pi_0 ; f, \pi_1 \rangle ; g^* \rangle ; f^*$.
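The composition rule can be phrased as a short Python sketch, representing a lens as a (get, put) pair of plain functions; the example lenses below are illustrative.

```python
def lens_compose(l1, l2):
    """Compose lenses (f, f*) : (A, A') -> (B, B') and (g, g*) : (B, B') -> (C, C').
    The composite get is f;g; the composite put re-runs f to feed g*'s result
    back through f*, exactly as in the formula above."""
    (f, f_star), (g, g_star) = l1, l2
    get = lambda a: g(f(a))
    put = lambda a, dc: f_star(a, g_star(f(a), dc))
    return get, put

# Toy lenses in Smooth, with puts given by reverse derivatives:
square = (lambda x: x ** 2, lambda x, dy: 2 * x * dy)
double = (lambda y: 2 * y, lambda y, dz: 2 * dz)

get, put = lens_compose(square, double)
print(get(3.0), put(3.0, 1.0))    # 18.0 and 12.0 = d/dx (2x^2) at x = 3
```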
The embedding of $\mathbf{Lens}(\mathcal{C})$ into the category of Tambara modules over $\mathcal{C}$ (see [7, Thm. 23]) provides a rich string diagrammatic language, in which lenses may be represented with forward/backward wires indicating the information flow. In this language, a morphism $(f, f^*)$ is written as below left, which can be 'expanded' as below right.
[string diagrams: a lens as a single box with forward and backward wires (left), expanded into its get and put components (right)]
It is clear in this language how to describe the composite of $(f, f^*)$ and $(g, g^*)$:
(3) [string diagram of lens composition: the put of the composite feeds $f(a)$ and the incoming change into $g^*$, and the result, together with $a$, into $f^*$]
Remark 1.
Note $\mathbf{Lens}(\mathcal{C})$ is a monoidal category, with $\binom{A}{A'} \otimes \binom{B}{B'}$ defined as $\binom{A \times B}{A' \times B'}$. However, in general $\mathbf{Lens}(\mathcal{C})$ is not itself Cartesian. This is easy to see when looking for even a terminal object: if $1$ is a terminal object in $\mathcal{C}$, then in general $\binom{1}{1}$ will not be a terminal object in $\mathbf{Lens}(\mathcal{C})$ — if it was, there would be a unique lens $\binom{A}{A'} \to \binom{1}{1}$, whose put part would need to be a (unique) map $A \times 1 \to A'$, but in general there are many such maps.
2.3. Parametric Lenses
The fundamental category where supervised learning takes place is the composite of the two constructions in the previous sections: the category $\mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$ of parametric lenses on $\mathcal{C}$, defined as follows.
• its objects are pairs $\binom{A}{A'}$ of objects from $\mathcal{C}$;
• its maps from $\binom{A}{A'}$ to $\binom{B}{B'}$ consist of a parameter pair $\binom{P}{P'}$ together with a lens $\binom{P}{P'} \otimes \binom{A}{A'} \to \binom{B}{B'}$, i.e. maps $f : P \times A \to B$ and $f^* : P \times A \times B' \to P' \times A'$.
String diagrams for parametric lenses are built by simply composing the graphical languages of the previous two sections — see (1), where respectively a morphism, a composition of morphisms, and a reparameterisation are depicted.
Given a generic morphism in $\mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$ as depicted in (1) on the left, one can see how it is possible to "learn" new values: it takes as input an input $a : A$, a parameter $p : P$, and a change $b' : B'$, and outputs a change $a' : A'$, a value $b : B$, and a change $p' : P'$. This last element is the key component for supervised learning: intuitively, it says how to change the parameter values to get the neural network closer to the true value of the desired function.
The question, then, is how one is to define such a parametric lens given nothing more than a neural network, i.e., a parametric map $f : P \times A \to B$. This is precisely what the gradient operation provides, and its generalisation to categories is explored in the next subsection.
2.4. Cartesian Reverse Differential Categories
Fundamental to all types of gradient-based learning is, of course, the gradient operation. In most cases this gradient operation is performed in the category of smooth maps between Euclidean spaces. However, recent work [59] has shown that gradient-based learning can also work well in other categories; for example, in a category of Boolean circuits. Thus, to encompass these examples in a single framework, we will work in a category with an abstract gradient operation.
A Cartesian left additive category [15, Defn. 1] consists of a category with chosen finite products (including a terminal object), and an addition operation and zero morphism in each homset, satisfying various axioms. A Cartesian reverse differential category (CRDC) [15, Defn. 13] consists of a Cartesian left additive category $\mathcal{C}$, together with an operation $R$ which provides, for each map $f : A \to B$, a map $R[f] : A \times B \to A$ satisfying various axioms (see Def. B).
For $f : A \to B$, the pair $(f, R[f])$ forms a lens from $\binom{A}{A}$ to $\binom{B}{B}$. We will pursue the idea that $R[f]$ acts as the backwards map, thus giving a means to "learn" $f$.
Note that assigning the type $A \times B \to A$ to $R[f]$ hides some relevant information: $B$-values in the domain and $A$-values in the codomain of $R[f]$ do not play the same role as values of the same types in $f$: in $R[f]$, they really take in a tangent vector at $B$ and output a tangent vector at $A$ (cf. the definition of $R[f]$ in $\mathbf{Smooth}$, Example 2.4 below). To emphasise this, we will type $R[f]$ as a map $A \times B' \to A'$ (even though in reality $A = A'$ and $B = B'$), thus meaning that $(f, R[f])$ is actually a lens from $\binom{A}{A'}$ to $\binom{B}{B'}$. This typing distinction will be helpful later on, when we want to add additional components to our learning algorithms.
The following two examples of CRDCs will serve as the basis for the learning scenarios of the upcoming sections.
The category $\mathbf{Smooth}$ (Example 2.1) is Cartesian, with product again given by addition of natural numbers, and it is also a Cartesian reverse differential category: given a smooth map $f : \mathbb{R}^n \to \mathbb{R}^m$, the map $R[f] : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ sends a pair $(x, \delta)$ to $J[f]^T_x \cdot \delta$: the transpose of the Jacobian of $f$ at $x$ applied to $\delta$. For example, if $f : \mathbb{R}^2 \to \mathbb{R}$ is the map $f(x_1, x_2) = x_1 x_2$, then $R[f]((x_1, x_2), \delta) = (x_2 \delta, x_1 \delta)$.
Using the reverse derivative (as opposed to the forward derivative) is well-known to be much more computationally efficient for functions $\mathbb{R}^n \to \mathbb{R}^m$ when $m \ll n$ (for example, see [31]), as is the case in most supervised learning situations (where often $m = 1$).
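As a quick numeric illustration — a sketch assuming numpy, with the Jacobian written out by hand rather than computed by a framework:

```python
import numpy as np

def f(x):                          # f(x1, x2) = x1 * x2, as in the example
    return np.array([x[0] * x[1]])

def Rf(x, dy):                     # R[f] : R^2 x R^1 -> R^2, linear in dy
    J = np.array([[x[1], x[0]]])   # Jacobian of f at x, written by hand
    return J.T @ dy                # transpose-Jacobian action on dy

x, dy = np.array([3.0, 5.0]), np.array([1.0])
print(Rf(x, dy))                   # [5. 3.] = (x2 * dy, x1 * dy)
```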
Another CRDC is the symmetric monoidal category $\mathbf{POLY}_{\mathbb{Z}_2}$ [15, Example 14] with objects the natural numbers and morphisms $n \to m$ the $m$-tuples of polynomials in $\mathbb{Z}_2[x_1, \ldots, x_n]$. When presented by generators and relations, these morphisms can be viewed as a syntax for Boolean circuits, with parametric lenses for such circuits (and their reverse derivative) described in [59].
Remark 2.
The definition of a CRDC (see Def. B) comprises 7 axioms describing the interaction of the reverse differential operator with the rest of the cartesian left-additive structure. As pointed out in [14, Sec. 2.3], the last two axioms are independent of the others. In this paper we will additionally see that these two axioms are not needed to compositionally model the update step of supervised learning.
The remark above can be stated abstractly in two steps. Firstly, we note that a CRDC defines, for each morphism $f$, a lens $(f, R[f])$ whose backwards map is additive in the second component (see Def. B for the definition of additivity in the second component). Such lenses form a subcategory $\mathbf{Lens}_A(\mathcal{C})$ of $\mathbf{Lens}(\mathcal{C})$, and give rise to the following functor.
Theorem. Lenses with backward passes additive in the second component form a functor $\mathbf{Lens}_A$ on cartesian left-additive categories.
Proof 2.1.
The second step is the observation that a coalgebra of this functor gives us a choice of a cartesian left-additive category $\mathcal{C}$ and a cartesian left-additive functor $\mathcal{C} \to \mathbf{Lens}_A(\mathcal{C})$ such that a number of axioms are satisfied: precisely the first five axioms of a CRDC.
Proposition (compare [15, Prop. 31]). A coalgebra of the copointed endofunctor $\mathbf{Lens}_A$ gives rise to a cartesian left-additive category equipped with a reverse differential combinator which satisfies the first five axioms of a cartesian reverse derivative category.
Proof 2.2.
A coalgebra of $\mathbf{Lens}_A$ consists of a category $\mathcal{C}$ and a cartesian left-additive functor $R_{\mathcal{C}} : \mathcal{C} \to \mathbf{Lens}_A(\mathcal{C})$ such that the following diagram commutes.
(4) [commuting diagram: $R_{\mathcal{C}}$ followed by the forgetful projection $\mathbf{Lens}_A(\mathcal{C}) \to \mathcal{C}$ is the identity on $\mathcal{C}$]
The data of the functor $R_{\mathcal{C}}$ is equivalent to the data of a reverse differential combinator $R$: every map $f$ in $\mathcal{C}$ is mapped to a lens whose forward part, by Eq. (4) above, is restricted to be $f$ itself, leaving the choice of the backward part as the only freedom in this functor. What remains to prove is that this backward part satisfies the first five axioms of a CRDC. We do this in Appendix B.
We will see in the next section how only the data of the functor $R_{\mathcal{C}}$ is used to model supervised learning, justifying our claim that only the first five axioms of a CRDC are needed.
3. Components of learning as Parametric Lenses
As seen in the introduction, in the learning process there are many components at work: a model, an optimiser, a loss map, a learning rate, etc. In this section we show how each such component can be understood as a parametric lens. Moreover, for each component, we show how our framework encompasses several variations of the gradient-descent algorithms, thus offering a unifying perspective on many different approaches that appear in the literature.
3.1. Models as Parametric Lenses
We begin by characterising the models used for training as parametric lenses. In essence, our approach identifies a set of abstract requirements necessary to perform training by gradient descent, which covers the case studies that we will consider in the next sections.
The leading intuition is that a suitable model is a parametric map, equipped with a reverse derivative operator. Using the formal developments of Section 2, this amounts to assuming that a model is a morphism in $\mathbf{Para}(\mathcal{C})$, for a CRDC $\mathcal{C}$. In order to visualise such a morphism as a parametric lens, it then suffices to apply under $\mathbf{Para}$ the canonical morphism $R_{\mathcal{C}} : \mathcal{C} \to \mathbf{Lens}(\mathcal{C})$ (which exists for any CRDC $\mathcal{C}$, see Prop. 2.1; here we are treating $R_{\mathcal{C}}$ as postcomposed with the inclusion $\mathbf{Lens}_A(\mathcal{C}) \hookrightarrow \mathbf{Lens}(\mathcal{C})$), mapping $f$ to $(f, R[f])$. This yields a functor $\mathbf{Para}(R_{\mathcal{C}}) : \mathbf{Para}(\mathcal{C}) \to \mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$, pictorially defined as
(5) [string diagram: the parametric map $f$ is sent to the parametric lens $(f, R[f])$, with backward wires for the changes in $A$ and $P$]
[Neural networks] As noted previously, to learn a function of type $\mathbb{R}^n \to \mathbb{R}^m$, one constructs a neural network, which can be seen as a function of type $\mathbb{R}^w \times \mathbb{R}^n \to \mathbb{R}^m$, where $\mathbb{R}^w$ is the space of parameters of the neural network. As seen in Example 2.1, this is a map in $\mathbf{Para}(\mathbf{Smooth})$ of type $n \to m$ with parameter space $w$. Then one can apply the functor in (5) to present a neural network together with its reverse derivative operator as a parametric lens, i.e. a morphism in $\mathbf{Para}(\mathbf{Lens}(\mathbf{Smooth}))$.
[Boolean and Polynomial circuits] For learning of Boolean circuits as described in [59], the recipe is the same as in Example 3.1, except that the base category is $\mathbf{POLY}_{\mathbb{Z}_2}$ (see Example 2.4). The important observation here is that $\mathbf{POLY}_{\mathbb{Z}_2}$ is a CRDC, see [15, 59], and thus we can apply the functor in (5). Note this setting can be generalised to circuits over any polynomial ring, see [61].
Note a model/parametric lens takes as inputs an element of $A$, a parameter $P$, and an element of $B'$ (a change in $B$), and outputs an element of $B$, a change in $A$, and a change in $P$. This is not yet sufficient to do machine learning! When we perform learning, we want to input a parameter $p$ and a pair $(a, b)$ and receive a new parameter $p$. Instead, the parametric lens expects a change in $B$ (not an element of $B$) and outputs a change in $P$ (not an element of $P$). Deep dreaming, on the other hand, wants to return an element of $A$ (not a change in $A$). Thus, to do machine learning (or deep dreaming) we need to add additional components to this picture; we will consider these additional components in the next sections.
We now proceed to describe the internals of a model, and translate several examples of models in deep learning to their categorical form. But before doing so, we clarify some terminology. While 'layers' and 'models' are both parametric maps, the former typically refers to components of larger models, while the latter refers to the final model to be learned in the manner described in Section 4.1.
Remark 3.
An unfortunate ambiguity in deep learning terminology is the second meaning of ‘layer’. For example, the ‘hidden layer’ of a model refers to internal values of a neural network, corresponding to the ‘wires’ of a string diagram. We will avoid using the term ‘layer’ in this sense unless explicitly noted.
In deep learning, one often speaks of ‘model architectures’. This can mean either a specific model (e.g., ResNet50 [32]) or a family of models employing a particular technique. For example, one says a model has a ‘convolutional architecture’ when it contains a convolutional layer (Example 3.1.1). Examples of layers, models, and architectures are given in the following sections.
3.1.1. Layers
The dense or fully-connected layer is a component of many neural network architectures. In the categorical viewpoint, a dense layer is the composition of linear, bias, and activation layers, which we describe first. Unless explicitly noted, we will assume for simplicity that most layers are maps in $\mathbf{Smooth}$. However, many of the maps defined here only require a multiplication map as additional structure, and so can be defined in any cartesian distributive category [60] such as $\mathbf{POLY}_{\mathbb{Z}_2}$.
[Linear layer] A linear or matrix-multiply layer is the parametric map $\mathbb{R}^{m \cdot n} \times \mathbb{R}^n \to \mathbb{R}^m$ sending $(W, x)$ to $W x$: the matrix-vector product of $W$, interpreted as the coefficients of an $m \times n$ matrix, and the vector $x$.
Note that layers are best thought of as families of maps. For example, there is a linear layer morphism for each choice of dimensions $n$ and $m$.
[Bias layer] A bias layer is a parametric map $\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ sending $(b, x)$ to $b + x$: the cartesian left-additive addition map. The reverse derivative of addition is the copy map, so the backward pass of the bias layer, depicted as a parametric lens, simply copies the incoming change to both the parameter and the input wires.
An activation layer is a typically nonlinear, trivially parametric map often applied to the output of another layer. Many are simply the $n$-fold tensor product of a univariate function $\mathbb{R} \to \mathbb{R}$.
[Sigmoid Activation] The sigmoid activation layer is the $n$-fold tensor product of the sigmoid function $\sigma(x) = \frac{1}{1 + e^{-x}}$.
Note that, unlike other layers considered so far, while sigmoid is a map in $\mathbf{Smooth}$, it is not a map in $\mathbf{POLY}_{\mathbb{Z}_2}$. An example of an activation layer which is not in $\mathbf{Smooth}$ is the ReLU map.
[ReLU Activation] The 'Rectified Linear Unit' activation function is the map $x \mapsto x \cdot \delta_{>0}(x)$, where $\delta_{>0}$ is the positive indicator function. The activation layer is again the $n$-fold tensor product of this function. Although ReLU is not a smooth map, some presentations of RDCs can be extended via Theorem 3.1 of [60] to include the positive indicator function $\delta_{>0}$. The ReLU function can then be expressed in terms of this function, and its reverse derivative can be derived as $R[\mathrm{ReLU}](x, \delta) = \delta \cdot \delta_{>0}(x)$.
The combination of linear, bias, and choice of activation layer gives a dense or fully-connected layer.
[Dense Layer] A dense or fully-connected layer is the following composition.
(6) [string diagram: a linear layer, followed by a bias layer, followed by an activation layer]
where some choice of input dimension $n$, output dimension $m$, and activation layer is assumed. Note that the activation layer has no parameters.
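A minimal numpy sketch of a dense layer as a parametric lens follows, assuming tanh as the chosen activation; shapes and names are illustrative.

```python
import numpy as np

def dense_forward(params, x):
    """Get part: activation(W @ x + b)."""
    W, b = params
    return np.tanh(W @ x + b)

def dense_backward(params, x, dy):
    """Put part: the reverse derivative, returning changes for ((W, b), x)."""
    W, b = params
    z = W @ x + b
    dz = (1.0 - np.tanh(z) ** 2) * dy   # reverse derivative of tanh
    dW, db = np.outer(dz, x), dz        # reverse derivative of W @ x + b
    dx = W.T @ dz
    return (dW, db), dx

W, b, x = np.ones((2, 3)), np.zeros(2), np.array([1.0, 2.0, 3.0])
(dW, db), dx = dense_backward((W, b), x, np.array([1.0, 1.0]))
print(dW.shape, db.shape, dx.shape)     # (2, 3) (2,) (3,)
```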
The final two examples of layers we cover here are convolutional and max-pooling layers, which are common in models for image-processing tasks.
[Convolutional Layer] A convolutional layer is a map $(k, a) \mapsto k * a$, where $*$ denotes the discrete 2D convolution of a kernel $k$ (the parameter) and an image $a$. The output of a convolutional layer is again an image, whose dimensions are determined by those of the kernel and the input.
A number of further variations of the convolutional layer exist in the literature, but the basic idea is to use 2D convolution to map a kernel (the parameters) over the input. Convolutional layers are frequently composed with max-pooling layers.
[Max-Pooling Layer] A max-pooling layer computes the maximum of each of the fixed-size subregions of the input image.
However, as with the ReLU layer, max-pooling layers cannot be thought of as maps in $\mathbf{Smooth}$. Nevertheless, by again appealing to [60, Theorem 3.1], one can extend a presentation of RDCs to include a $\max$ function from which the max-pooling layer and its reverse derivative can be derived.
3.1.2. Architectures
We now consider some examples of neural network architectures defined in terms of the layers above. Since both layers and architectures are just parametric maps, we can consider the layers by themselves as architectures already, and in fact the linear and dense layers are sufficient to solve some simple machine learning problems. The first non-trivial architecture we consider is the ‘single hidden layer’ network.
[Hidden Layer Network] A neural network with a single 'hidden layer' (in the sense of Remark 3) is the composition of two dense layers.
[string diagram: two dense layers composed in sequence, with their parameter wires kept separate]
We emphasize that the term 'hidden layer' here ambiguously refers to the central wires rather than the dense morphisms.
This architecture is demonstrated in detail in the experiments [19] accompanying this paper. Also included in our experiments is a convolutional model for classifying images of handwritten digits (the MNIST [37] dataset). A simplified version is below.
[Convolutional Architecture] First, define a block as the composition of a convolutional, a max-pooling, and a ReLU layer. One can then define a convolutional architecture for classifying the $28 \times 28$-pixel images of the MNIST dataset into 10 digits as follows.
[string diagram of the convolutional architecture, composing such blocks]
Lastly, we mention the architecture of Generative Adversarial Networks, which we define and thoroughly discuss in Section 4.2.
3.1.3. Weight Tying and Batching
A number of general techniques are employed in designing deep learning models. We now describe two examples of these techniques in terms of their categorical interpretations. The first is weight-tying, which is required in Subsection 4.2 to define a more complex architecture for unsupervised learning: the GAN.
[Weight Tying] Weight tying is the sharing of parameters between two different components. Categorically speaking, this means using the copy map on parameters as below.
[string diagram: a single parameter wire duplicated by the copy map and supplied to two component boxes]
Note that the two components have the same parameters, but might be applied to different parts of the input. In this sense, one can think of convolutional layers (Example 3.1.1) as using weight-tying.
A related technique is batching. So far, we have considered learning as updating a model with a single data example at each timestep. However, it is common for efficiency purposes to update the model using a batch of examples: a finite number $b$ of examples processed at once.
[Batching] Suppose we have a model $f : P \times A \to B$. The batched model with batch size $b$ is a parametric map $P \times A^b \to B^b$ which consists of $b$ copies of $f$ applied to each input, but with the same parameters. For example, when $b = 2$, the batch update is as follows.
[string diagram: two copies of the model sharing a single copied parameter wire, applied to two inputs]
The above diagrams highlight the relationship between weight-tying and batching. However, note that these techniques serve different purposes: while weight-tying can be used to reduce the number of weights in a model, batching is used for efficiency reasons to update a model with multiple examples in parallel.
3.2. Loss Maps as Parametric Lenses
Another key component of any learning algorithm is the choice of loss map. This gives a measurement of how far the current output of the model is from the desired output. In standard learning in $\mathbf{Smooth}$, this loss map is viewed as a map of type $B \times B \to \mathbb{R}$. However, in our setup, this is naturally viewed as a parametric map from $B$ to $\mathbb{R}$ with parameter space $B$. (Here the loss map has its parameter space equal to its input space. However, putting loss maps on the same footing as models lends itself to further generalisations where the parameter space is different, and where the loss map can itself be learned. See Generative Adversarial Networks, [10, Figure 7].) We also generalise the codomain to an arbitrary object $L$.
A loss map on $B$ consists of a parametric map $(B, \mathrm{loss}) : B \to L$ of $\mathbf{Para}(\mathcal{C})$ for some object $L$.
Note that we can precompose a loss map with a neural network (below left), and apply the functor in (5) to obtain the parametric lens below right.
(7) [string diagrams: the model composed with the loss map in $\mathbf{Para}(\mathcal{C})$ (left), and the corresponding parametric lens (right)]
This is getting closer to the parametric lens we want: it can now receive inputs of type $B$. However, this is at the cost of now needing an input of type $L'$; we consider how to handle this in the next section.
[Quadratic error] In $\mathbf{Smooth}$, the standard loss function on $\mathbb{R}^b$ is quadratic error: it uses $L = \mathbb{R}$ and has parametric map $e : \mathbb{R}^b \times \mathbb{R}^b \to \mathbb{R}$ given by
$e(b_t, b_p) = \frac{1}{2} \sum_{i=1}^{b} ((b_p)_i - (b_t)_i)^2$,
where we think of $b_t$ as the "true" value and $b_p$ as the predicted value. This has reverse derivative $R[e] : \mathbb{R}^b \times \mathbb{R}^b \times \mathbb{R} \to \mathbb{R}^b \times \mathbb{R}^b$ given by $R[e](b_t, b_p, \alpha) = (\alpha \cdot (b_t - b_p), \alpha \cdot (b_p - b_t))$ — note $\alpha$ suggests the idea of learning rate, which we will explore in Section 3.3.
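As a sketch, the get and put parts of this lens in numpy (the scalar alpha fed to the put plays the role of the learning rate of Section 3.3):

```python
import numpy as np

def loss_get(b_t, b_p):
    """e(b_t, b_p) = 1/2 * sum((b_p - b_t)^2); b_t is the 'parameter'."""
    return 0.5 * np.sum((b_p - b_t) ** 2)

def loss_put(b_t, b_p, alpha):
    """R[e]: changes for (b_t, b_p), each scaled by alpha."""
    return alpha * (b_t - b_p), alpha * (b_p - b_t)

b_t, b_p = np.array([1.0, 0.0]), np.array([0.8, 0.3])
print(loss_get(b_t, b_p), loss_put(b_t, b_p, 1.0))
```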
[Boolean error] In $\mathbf{POLY}_{\mathbb{Z}_2}$, the loss function on $\mathbb{Z}_2^b$ which is implicitly used in [59] is a bit different: it uses $L = \mathbb{Z}_2^b$ and has parametric map given by $e(b_t, b_p) = b_t + b_p$. (Note that this is addition in $\mathbb{Z}_2$; equivalently, this is given by XOR.) Its reverse derivative is of type $\mathbb{Z}_2^b \times \mathbb{Z}_2^b \times \mathbb{Z}_2^b \to \mathbb{Z}_2^b \times \mathbb{Z}_2^b$, given by $R[e](b_t, b_p, \alpha) = (\alpha, \alpha)$.
[Softmax cross entropy] The Softmax cross entropy loss is an $\mathbb{R}^b$-parametric map $e : \mathbb{R}^b \times \mathbb{R}^b \to \mathbb{R}$ defined by
$e(b_t, b_p) = - \sum_{i=1}^{b} (b_t)_i \log \mathrm{Softmax}(b_p)_i$,
where $\mathrm{Softmax}(b_p)_i = \frac{\exp((b_p)_i)}{\sum_j \exp((b_p)_j)}$ is defined componentwise for each class $i$.
We note that, although $b_t$ needs to be a probability distribution, at the moment there is no need to ponder the question of interaction of probability distributions with the reverse derivative framework: one can simply consider $b_t$ as the image of some logits under the $\mathrm{Softmax}$ function.
[Dot product] In Deep Dreaming (Section 4.3) we often want to focus only on a particular element of the network output $b$. This is done by supplying a one-hot vector as the ground truth to the loss function $e(b_t, b_p) = b_t \cdot b_p$, which computes the dot product of two vectors. If the ground truth vector $b_t$ is a one-hot vector (active at the $i$-th element), then the dot product performs masking of all inputs except the $i$-th one. Note the reverse derivative of the dot product is given by $R[e](b_t, b_p, \alpha) = (\alpha \cdot b_p, \alpha \cdot b_t)$.
3.3. Learning Rates as Parametric Lenses
After models and loss maps, another ingredient of the learning process is the learning rate, which we formalise as follows.
A learning rate $\alpha$ on $L$ consists of a lens from $\binom{L}{L'}$ to $\binom{1}{1}$, where $1$ is a terminal object in $\mathcal{C}$. Note that the get component of the learning rate lens must be the unique map to $1$, while the put component is a map $L \times 1 \to L'$; that is, simply a map $\alpha^* : L \to L'$. Thus we can view $\alpha$ as a parametric lens (with trivial parameter space) and compose it in $\mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$ with a model and a loss map (cf. (7)) to get
(8) [string diagram: model, loss map, and learning rate composed as parametric lenses]
In standard supervised learning in $\mathbf{Smooth}$, one fixes some $\epsilon > 0$ as a learning rate, and this is used to define $\alpha$: $\alpha^*$ is simply constantly $-\epsilon$, i.e., $\alpha^*(l) = -\epsilon$ for any $l \in L$.
In supervised learning in $\mathbf{POLY}_{\mathbb{Z}_2}$, the standard learning rate is quite different: for a given $L$ it is defined as the identity function, $\alpha^*(l) = l$.
Other learning rate morphisms are possible as well: for example, one could fix some $\epsilon > 0$ and define a learning rate in $\mathbf{Smooth}$ by $\alpha^*(l) = -\epsilon \cdot l$. Such a choice would take into account how far away the network is from its desired goal, and adjust the learning rate accordingly.
3.4. Optimisers as Reparameterisations
In this section we consider how to implement gradient descent, ascent, and other gradient updates into our framework. To this aim, note that the parametric lens representing our model (see (5)) outputs a value of type $P'$, which represents a change in the parameter space. Now, we would like to receive not just the requested change in the parameter, but the new parameter itself. This is precisely what a gradient update accomplishes, when formalised as a lens. We start by describing gradient ascent and gradient descent.
[Gradient ascent] Let $\mathcal{C}$ be a CRDC. Gradient ascent on $P$ is the lens
$(1_P, +) : \binom{P}{P} \to \binom{P}{P}$,
where $+ : P \times P \to P$ is the monoid structure of $P$. (Note that, as in the discussion in Section 2.4, we are implicitly assuming $P = P'$; we have merely notated them differently to emphasise the different "roles" they play: the first can be thought of as "points", the second as "vectors".)
Intuitively, such a lens allows one to receive the requested change in parameter and implement that change by adding that value to the current parameter. By its type, we can now "plug" the gradient ascent lens above the model in (5) — formally, this is accomplished as a reparameterisation of the parametric morphism, cf. Section 2.1. This gives us Figure 4 (left).
[Gradient ascent in Smooth] In $\mathbf{Smooth}$, the gradient ascent reparameterisation will take the output $p'$ from the model and add it to the current value of the parameter $p$, to get a new value $p + p'$.
[Gradient ascent in Boolean circuits] In the CRDC $\mathbf{POLY}_{\mathbb{Z}_2}$, the gradient ascent reparameterisation will again take the output from the model and add it to the current value of the parameter to get a new value; however, since $+$ in $\mathbb{Z}_2$ is the same as XOR, this can also be seen as taking the XOR of the current parameter and the requested change; this is exactly how this algorithm is implemented in [59].
[Gradient descent] Let $\mathcal{C}$ be a CRDC where every monoid is additionally a commutative group. (Since a homomorphism between groups needs to satisfy fewer equations than a monoid homomorphism, this means that every monoid homomorphism is also a group homomorphism. This in turn means there are no extra conditions we need to impose on such a CRDC equipped with group objects.) Gradient descent on $P$ is the lens
$(1_P, -) : \binom{P}{P} \to \binom{P}{P}$,
where $-(p, p') = p - p'$.
In $\mathbf{Smooth}$ this instantiates to the usual gradient descent. Gradient ascent in $\mathbf{POLY}_{\mathbb{Z}_2}$ is equal to gradient descent, because every element of $\mathbb{Z}_2$ is its own inverse. Intuitively, in $\mathbb{Z}_2$ there is always only one direction we can move (other than staying still): flipping the bit. Gradient descent and ascent are not usually seen as lenses — but they fit precisely into this picture that we are creating.
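Both lenses are one-liners in code; a sketch with numpy arrays as parameter objects, where get is the identity and put adds (or subtracts) the requested change:

```python
import numpy as np

# (get, put) pairs: get is the identity on P, put combines p with the change.
ascent  = (lambda p: p, lambda p, dp: p + dp)
descent = (lambda p: p, lambda p, dp: p - dp)

p, dp = np.array([1.0, 2.0]), np.array([0.1, -0.1])
print(ascent[1](p, dp), descent[1](p, dp))   # [1.1 1.9] and [0.9 2.1]
```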
Other variants of gradient descent also fit naturally into this framework, by allowing for additional input/output data associated with $P$. In particular, many of them keep track of the history of previous updates and use that to inform the next one. This is easy to model in our setup: instead of asking for a lens $\binom{P}{P} \to \binom{P}{P}$, we ask instead for a lens $\binom{S \times P}{S \times P} \to \binom{P}{P}$, where $S$ is some "state" object.
A stateful parameter update consists of a choice of object $S$ (the state object) and a lens $U : \binom{S \times P}{S \times P} \to \binom{P}{P}$.
Again, we view this optimiser as a reparameterisation which may be "plugged into" a model as in Figure 4 (right). Let us now consider how several well-known optimisers can be implemented in this way.
[Momentum] In the momentum variant of gradient descent, one keeps track of the previous change and uses this to inform how the current parameter should be changed. Thus, in this case, we set $S = P$, fix some $\gamma > 0$, and define the momentum lens $U : \binom{S \times P}{S \times P} \to \binom{P}{P}$ by $U(s, p) = p$ and $U^*(s, p, p') = (s', p + s')$, where $s' = -\gamma s + p'$. Note momentum recovers gradient descent when $\gamma = 0$.
In both standard gradient descent and momentum, the get part of our lens representation is trivial. However, as soon as we move to more complicated variants, this is no longer the case, as for instance in Nesterov momentum below.
[Nesterov momentum] In Nesterov momentum, one uses the momentum from previous updates to tweak the input parameter supplied to the network. We can precisely capture this by using a small variation of the lens in the previous example. Again, we set $S = P$, fix some $\gamma > 0$, and define the Nesterov momentum lens by letting the get map supply a "lookahead" parameter combining $p$ with the momentum term $\gamma s$, and $U^*$ as in the previous example.
[Adagrad] Given any fixed $\epsilon > 0$ and small $\delta > 0$, Adagrad [23] is given by $S = P$, with the lens whose get part is $(g, p) \mapsto p$. The put is $(g, p, p') \mapsto (g', p + \frac{\epsilon}{\delta + \sqrt{g'}} \odot p')$, where $g' = g + p' \odot p'$ and $\odot$ is the elementwise (Hadamard) product. Unlike with other optimisation algorithms, where the learning rate is the same for all parameters, Adagrad divides the learning rate of each individual parameter by the square root of the past accumulated gradients.
[Adam] Adaptive Moment Estimation (Adam) [36] is another method that computes adaptive learning rates for each parameter, by storing an exponentially decaying average of past gradients ($m$) and past squared gradients ($v$). For fixed $\beta_1, \beta_2 \in [0, 1)$, $\epsilon > 0$, and small $\delta > 0$, Adam is given by $S = P \times P$, with the lens whose get part is $(m, v, p) \mapsto p$ and whose put part is $(m, v, p, p') \mapsto (m', v', p + \frac{\epsilon}{\delta + \sqrt{\widehat{v'}}} \odot \widehat{m'})$, where $m' = \beta_1 m + (1 - \beta_1) p'$, $v' = \beta_2 v + (1 - \beta_2) (p' \odot p')$, and $\widehat{m'} = \frac{m'}{1 - \beta_1^t}$, $\widehat{v'} = \frac{v'}{1 - \beta_2^t}$.
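A sketch of momentum as a stateful lens in Python; note that sign conventions for the momentum term vary across references, and here we take the common convention in which the requested change dp already carries the learning rate:

```python
import numpy as np

def momentum(gamma=0.9):
    get = lambda s, p: p                 # the model sees the raw parameter
    def put(s, p, dp):
        s_new = gamma * s + dp           # accumulate past changes into the state
        return s_new, p + s_new          # new state and new parameter
    return get, put

# Nesterov momentum differs only in its get map, which supplies a
# "lookahead" parameter instead, e.g. get = lambda s, p: p + gamma * s.
get, put = momentum()
print(put(np.zeros(2), np.ones(2), np.array([-0.1, 0.2])))
```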
Note that, so far, optimisers/reparameterisations have been attached to the $P$ wires, in order to change the model's parameters (Fig. 4). In Section 4.3 we will study them on the $A$ wires instead, giving deep dreaming.
3.4.1. Can we compose optimisers?
Even though not explicitly acknowledged in the literature, optimisers can be composed, and this composition plays an important role in settings where deep learning intersects with multivariable optimisation. In such settings we are interested in their parallel composition, therefore giving a positive answer to the above question. (One might wonder whether optimisers can be composed in sequence as well. The apparent sequential composability of optimisers is unfortunately an artefact of our limited view without dependent types.) Parallel composition of optimisers arises out of the fact that optimisers are lenses, and lenses form a monoidal category (Remark 1). In such settings we might have, for instance, an optimiser of two variables which descends in one direction and ascends in the other.
[Gradient descent-ascent (GDA)] Given objects $P$ and $Q$, gradient descent-ascent on $P \times Q$ is the lens $\binom{P \times Q}{P \times Q} \to \binom{P \times Q}{P \times Q}$ whose get part is the identity, and whose put part sends $(p, q, p', q')$ to $(p - p', q + q')$.
In $\mathbf{Smooth}$ this gives an optimiser which descends on $P$ and ascends on $Q$. In $\mathbf{POLY}_{\mathbb{Z}_2}$ this map ends up computing the same update function on both parameter spaces: the one that just flips the underlying bit. This is something that ends up preventing us from modelling GANs in this setting (compare with Ex. 7, where both positive and negative polarities of the optimiser map are needed).
When it comes to optimisers of two parameters, gradient descent-ascent is a particular type of optimiser that is a product of two optimisers. But not all optimisers can be factored in such a way, much like a general monoidal product does not have to be cartesian. A good example of an optimiser on two parameters which is not such a product is competitive gradient descent [46]. We do not explicitly define or use it in this paper, instead referring the reader to the aforementioned reference for more information.
4. Learning with Parametric Lenses
In the previous section we have seen how all the components of learning can be modeled as parametric lenses. We now study how all these components can be put together to form learning systems. We cover the most common examples of supervised learning, also discussing different kinds of layers, architectures, and techniques such as weight tying and batching. We also consider unsupervised learning, in the form of Generative Adversarial Networks. Finally, in addition to systems that learn parameters, we study systems that learn their inputs. This is a technique commonly known as deep dreaming, and we present it as a natural counterpart of supervised learning of parameters.
Before we describe these systems, it will be convenient to represent all the inputs and outputs of our parametric lenses as parameters. In (8), we see the $\binom{P}{P'}$ and $\binom{B}{B'}$ inputs and outputs as parameters; however, the $\binom{A}{A'}$ wires are not. To view the inputs as parameters, we compose that system with the parametric lens $\eta$ we now define. The parametric lens $\eta$ has the type $\binom{1}{1} \to \binom{A}{A'}$ with parameter space $\binom{A}{A'}$, given by the identity lens on $\binom{A}{A'}$ with its input bent up into a parameter wire. Composing $\eta$ with the rest of the learning system in (8) gives us the closed parametric lens
(9) [string diagram: the closed parametric lens, with all inputs and outputs bent up into parameter ports]
This composite is now a map in $\mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$ from $\binom{1}{1}$ to $\binom{1}{1}$; all its inputs and outputs are now vertical wires, i.e., parameters. Unpacking it further, this is a lens of type $\binom{A \times P \times B}{A' \times P' \times B'} \to \binom{1}{1}$ whose get map is the terminal map, and whose put map is of the type $A \times P \times B \to A' \times P' \times B'$. It can be unpacked as the composite of the put maps of the model, the loss, and the learning rate.
In the next two sections we consider further additions to the image above which correspond to different types of supervised learning.
4.1. Supervised Learning of Parameters
The most common type of learning performed on (9) is supervised learning of parameters. This is done by reparameterising (cf. Section 2.1) the image in the following manner. The parameter ports are reparameterised by one of the (possibly stateful) optimisers described in the previous section, while the backward wires $A'$ of inputs and $B'$ of outputs are discarded. This finally yields the complete picture of a system which learns the parameters in a supervised manner:
Fixing a particular optimiser $(U, U^*)$, we again unpack the entire construction. This is a map in $\mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$ from $\binom{1}{1}$ to $\binom{1}{1}$ whose parameter space is $\binom{A \times S \times P \times B}{S \times P}$. In other words, this is a lens of type $\binom{A \times S \times P \times B}{S \times P} \to \binom{1}{1}$ whose get component is the terminal map. Its put map has the type $A \times S \times P \times B \to S \times P$ and unpacks to the composite which: reads the current state and parameter, runs the model forward on the input, compares the prediction with the ground truth via the loss and learning rate, backpropagates the resulting change through the model, and hands the requested parameter change to the optimiser to produce a new state and parameter.
While this formulation might seem daunting, we note that it just explicitly specifies the computation performed by a supervised learning system. The variable $\bar{p}$ represents the parameter supplied to the network by the stateful gradient update rule (in many cases this is equal to $p$); $b_p$ represents the prediction of the network (contrast this with $b_t$, which represents the ground truth from the dataset). Variables with a tick $'$ represent changes: $b_p'$ and $b_t'$ are the changes on predictions and true values respectively, while $p'$ and $a'$ are the changes on the parameters and inputs. Furthermore, all of this arises automatically out of the rule for lens composition (3); what we needed to specify are just the lenses themselves.
We justify and illustrate our approach on a series of case studies drawn from the literature. This presentation has the advantage of treating all these instances uniformly in terms of basic constructs, highlighting their similarities and differences. First, we fix some parametric map $(\mathbb{R}^p, f) : \mathbb{R}^n \to \mathbb{R}^m$ in $\mathbf{Para}(\mathbf{Smooth})$ and the constant negative learning rate $\alpha^*(l) = -\epsilon$ (Example 3.3). We then vary the loss function and the gradient update, seeing how the put map above reduces to many of the known cases in the literature.
[Quadratic error, basic gradient descent] Fix the quadratic error (Example 3.2) as the loss map and the basic gradient update (Example 4). Then the aforementioned put map simplifies. Since there is no state, its type reduces to $A \times P \times B \to P$, and we have $\mathrm{put}(a, p, b_t) = p + p'$, where $(p', a') = R[f]((p, a), -\epsilon \cdot (f(p, a) - b_t))$.
Note that $-\epsilon$ here is simply a constant, and due to the linearity of the reverse derivative in its second argument (cf. Section 2.4), we can slide it from the costate into the basic gradient update lens. Rewriting this update and performing this sliding, we obtain the closed-form update step
$\mathrm{put}(a, p, b_t) = p - \epsilon \cdot \left( R[f]((p, a), f(p, a) - b_t) \right)_P$,
where the negative descent component of gradient descent is here contained in the choice of the negative constant $-\epsilon$, and $(-)_P$ denotes the projection onto the parameter component.
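Putting the pieces together, a sketch of this simplified put map in Python, with the negative learning rate folded into an explicit subtraction; the toy linear model is illustrative:

```python
import numpy as np

def put(a, p, b_t, forward, backward, eps=0.05):
    b_p = forward(p, a)               # model forward pass
    dy = b_p - b_t                    # loss reverse derivative (rate folded in below)
    dp, da = backward(p, a, dy)       # model reverse pass; da is discarded
    return p - eps * dp               # gradient-descent reparameterisation

# Toy linear model f(p, a) = p * a on scalars:
forward = lambda p, a: p * a
backward = lambda p, a, dy: (a * dy, p * dy)
print(put(3.0, 0.5, 6.0, forward, backward))   # nudges p from 0.5 towards 2.0
```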
This example gives us a variety of regression algorithms solved iteratively by gradient descent: it embeds some parametric map into the system, which then performs regression on input data $(a, b_t)$, where $a$ denotes the input to the model and $b_t$ denotes the ground truth. If the corresponding $f$ is linear and the output one-dimensional, we recover simple linear regression with gradient descent. If the codomain is multi-dimensional, i.e. we are predicting multiple scalars, then we recover multivariate linear regression. Likewise, we can model a multi-layer perceptron, or even more complex neural network architectures performing supervised learning of parameters, simply by changing the underlying parametric map.
[Softmax cross entropy, basic gradient descent] Fix Softmax cross entropy (Example 3.2) as the loss map and the basic gradient update (Example 4). Again the put map simplifies. The type reduces to $A \times P \times B \to P$ and we have $\mathrm{put}(a, p, b_t) = p + p'$, where $(p', a') = R[f]((p, a), -\epsilon \cdot (\mathrm{Softmax}(f(p, a)) - b_t))$. The same rewriting performed on the previous example can be done here.
This example recovers logistic regression, i.e. classification.
[Mean squared error, Nesterov Momentum] Fix the quadratic error (Example 3.2) as the loss map and Nesterov momentum (Example 5) as the gradient update. This time the put map does not have a simplified type. Its implementation reduces to the composite which first computes the lookahead parameter via the non-trivial get map of the optimiser, backpropagates the loss change through the model at this lookahead point, and then performs the stateful momentum update.
This example with Nesterov momentum differs in two key points from all the other ones: (i) the optimiser is stateful, and (ii) its get map is not trivial. While many other optimisers are stateful, the non-triviality of the get map here showcases the importance of lenses. They allow us to make precise the notion of computing a "lookahead" value for Nesterov momentum, something that is usually handled in ad-hoc ways in practice. Here, the algebra of lens composition handles this case naturally by using the get map, a seemingly trivial, unused piece of data for previous optimisers.
Our last example, using the different base category $\mathbf{POLY}_{\mathbb{Z}_2}$, shows that our framework captures learning in not just continuous, but also discrete settings. Again, we fix a parametric map $(\mathbb{Z}_2^p, f) : \mathbb{Z}_2^n \to \mathbb{Z}_2^m$, but this time we fix the identity learning rate (Example 3.3), instead of a constant one.
[Basic learning in Boolean circuits] Fix XOR as the loss map (Example 3.2) and the basic gradient update (Example 4). The put map again simplifies. The type reduces to $A \times P \times B \to P$ and the implementation to $\mathrm{put}(a, p, b_t) = p + p'$, where $(p', a') = R[f]((p, a), f(p, a) + b_t)$.
A sketch of learning iteration. Having described a number of examples of supervised learning, we outline how to model learning iteration in our framework. Recall the aforementioned put map whose type is $A \times B \times P \to P$ (for simplicity, here modelled without state $S$). This map takes an input-output pair $(a_0, b_0)$ and the current parameter $p_i$, and produces an updated parameter $p_{i+1}$. At the next time step, it takes a potentially different input-output pair $(a_1, b_1)$ and the updated parameter $p_{i+1}$, and produces $p_{i+2}$. This process is then repeated. We can model this iteration as a composition of the put map with itself, as a composite map of type $A \times B \times A \times B \times P \to P$. This map takes two input-output pairs $(a_0, b_0)$, $(a_1, b_1)$ and a parameter, and produces a new parameter by processing these datapoints in sequence. One can see how this process can be iterated any number of times, and even represented as a string diagram.
But we note that, with a slight reformulation of the put map, it is possible to obtain a conceptually much simpler definition. The key insight lies in seeing that the put map is essentially an endo-map $P \to P$ with some extra inputs $A \times B$; it is a parametric map!
In other words, we can recast the put map as a parametric map $(A \times B, \mathrm{put}) : P \to P$. Being an endo-map, it can be composed with itself. The resulting composite is an endo-map taking two "parameters": the input-output pair at the current time step and the one at the next time step. This process can then be repeated, with composition automatically taking care of the algebra of iteration.
[string diagram: the parametric put map composed with itself, each copy consuming its own input-output pair]
This reformulation captures the essence of parameter iteration: one can think of it as a trajectory through the parameter space; but it is a trajectory parameterised by the dataset. With different datasets the algorithm will take a different path through this space and learn different things.
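A sketch of this iterated composite in Python, where functools.reduce plays the role of repeated composition of the parametric endo-map, with the dataset as its stream of "parameters":

```python
from functools import reduce

def put(p, datapoint, eps=0.05):
    """One step of the endo-map P -> P for the toy model f(p, a) = p * a."""
    a, b_t = datapoint
    dp = a * (p * a - b_t)        # backpropagated parameter change
    return p - eps * dp

dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 100
print(reduce(put, dataset, 0.0))  # the iterated composite converges towards 2.0
```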
4.2. Unsupervised Learning
Many kinds of systems that are traditionally considered unsupervised can be recast in supervised form. One example is a Generative Adversarial Network [30, 3]. This is a neural network architecture that lies at the intersection of deep learning and game theory: it is a system of two neural networks trained with "competing" optimisers. One neural network is called the generator, whose optimiser is, as usual, tasked with moving in the direction of the negative gradient of the loss. However, the other network — called the discriminator — has an optimiser which is tasked with moving in the positive, i.e. ascending direction of the gradient of the total loss — maximising the loss. The actual networks are wired in such a way (Fig. 6) that the discriminator effectively serves as a loss function to the generator, i.e. it is the generator's only source of information on how to update. Dually, taking the vantage point of the discriminator, the generator serves as an ever-changing source of training data.
[GAN] Fix three objects $Z$, $X$ and $L$ in $\mathcal{C}$ (respectively called "the latent space", "the data space" and "the payoff space"). Then, given two parametric morphisms — a generator $(P, g) : Z \to X$ and a discriminator $(Q, d) : X \to L$ — a generative adversarial network is the parametric morphism $\mathrm{gan} : Z \times X \to L \times L$ defined as the composite which applies the (weight-tied) discriminator both to the generator's output and to the real sample, i.e. $\mathrm{gan}((p, q), (z, x)) = (d(q, g(p, z)), d(q, x))$.
Its string diagram representation is shown in Fig. 6, where we see that a GAN consists of two parallel tracks. We will see in Ex. 7 how the first one will be used to process latent vectors, and the second one to process samples from a chosen dataset. Despite the fact that there are two boxes labeled $d$, they are weight-tied, making them behave like a single unit.
We can easily state what the reverse derivative of $\mathrm{gan}$ is in terms of the reverse derivatives of its components:
(10) [formula: $R[\mathrm{gan}]$ expressed via $R[g]$ and $R[d]$]
The pair of $\mathrm{gan}$ and its reverse derivative yields a parametric lens of type $\binom{Z \times X}{Z' \times X'} \to \binom{L \times L}{L' \times L'}$ (Fig. 7), which we interpret as follows.
It consumes two pieces of data: a latent vector $z$ and a "real" sample $x$ from the dataset, in addition to a parameter $p$ for the generator and a parameter $q$ for the discriminator. What happens then are two independent evaluations done by the discriminator. The first one uses the generator's attempt at producing a sample from the dataset (the latent vector $z$ which was fed into it, producing $g(p, z)$) as input to the discriminator, producing a payoff $d(q, g(p, z))$ for this particular sample. The second one uses the actual sample $x$ from the dataset, producing the payoff $d(q, x)$.
By choosing $\mathrm{gan}$ as the parametric map representing our supervised learning model, we can differentiate it as in Fig. 7 and, with the appropriate choice of a loss function, produce the learning system known in the literature as the Wasserstein GAN [3].
[GANs, Dot product, GDA] Fix $\mathrm{gan}$ as the parametric map (Def. 4.2), gradient descent-ascent (Ex. 3.4.1) as the optimiser, and the dot product (Ex. 3.2) as the loss function. Then the put map becomes a map of type $Z \times X \times P \times Q \times L \times L \to P \times Q$. Unpacking its implementation, and using the linearity of the backward pass to extract the generator and discriminator components of the update separately, we obtain a closed-form update for both networks: the generator updates its parameters using gradients flowing through the discriminator's evaluation of its generated samples, while the discriminator updates its parameters using its evaluations of both the generated and the real samples.
In this ultimate form it is possible to see how the update recovers that of Wasserstein GANs. The last missing piece is to note that the supervision labels here are effectively "masks". Just like in standard supervised learning, where an input-output pair consists of an input value and a corresponding label which guides the direction in which the output should have been improved, here the situation is the same. Given any latent vector $z$, its corresponding "label" is a learning signal which does not change anything in the update, effectively signalling to the generator's and discriminator's optimisers that they should descend (minimising the assigned loss, making the image more realistic next time), and respectively ascend (maximising the loss, becoming better at detecting when its input is a sample produced by the generator). On the other hand, given any real sample $x$, its corresponding "label" is a learning signal telling the discriminator's optimiser that it should do the opposite of what it usually does: it should descend, causing it to assign a lower loss value to actual samples from the dataset. In other words, the input-output pairs here always come with labels of this fixed form, making this GAN in many ways a constantly supervised model. Nonetheless, these different "forces" that pull the discriminator in different directions depending on the source of the input, coupled with the ever-changing generated inputs, give GANs the intrinsically complex dynamics that are still being studied.
The fact that we were able to encode the Wasserstein GAN in this form in our framework is a consequence of the simple formulation of its loss function, which is effectively given by subtraction [3, Theorem 3].
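To make this concrete, the following is a minimal Python sketch of the update just derived. It is not the code of our library: g is the generator's forward map, and R_g, R_d are assumed to be given functions returning the pairs of parameter- and input-components of the reverse derivatives of the generator and discriminator.

```python
def gan_put(p, q, z, x, g, R_g, R_d, lr=0.01):
    """One gradient descent-ascent step with dot-product loss and labels (1, -1)."""
    y_z, y_x = 1.0, -1.0                # the constant supervision "masks"
    fake = g(p, z)                      # generator's attempt at a sample
    dq1, gamma = R_d((q, fake), y_z)    # backward pass through d at the generated sample
    dp, _ = R_g((p, z), gamma)          # backward pass through g
    dq2, _ = R_d((q, x), y_x)           # backward pass through d at the real sample
    return p - lr * dp, q + lr * (dq1 + dq2)  # generator descends, discriminator ascends

# Toy instantiation: g(p, z) = p * z and d(q, x) = q * x on scalars, with
# reverse derivatives written out by hand.
g = lambda p, z: p * z
R_g = lambda pz, v: (pz[1] * v, pz[0] * v)   # (change to p, change to z)
R_d = lambda qx, v: (qx[1] * v, qx[0] * v)   # (change to q, change to x)
p, q = gan_put(1.0, 0.5, 2.0, 3.0, g, R_g, R_d)
```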
4.3. Deep Dreaming: Supervised Learning of Inputs
We have seen that reparameterising the parameter port with gradient descent allows us to capture supervised parameter learning. In this section we describe how reparameterising the input port provides us with a way to enhance an input image to elicit a particular interpretation. This is the idea behind the technique called Deep Dreaming, appearing in the literature in many forms [40, 38, 22, 51].
(11) [string diagram: the deep dreaming parametric lens, in which gradient update is applied to the input port rather than to the parameter port]
Deep dreaming is a technique which uses the parameters of some trained classifier network to iteratively dream up, or amplify, features of a chosen class on a chosen input. For example, if we start with an image of a landscape, the label of a "cat" and the parameters of a sufficiently well-trained classifier, we can start performing "learning" as usual: computing the predicted class for the landscape for the network with those parameters, and then computing the distance between the prediction and our label of a cat. When performing backpropagation, the respective changes computed for each layer tell us how the activations of that layer should have been changed to be more "cat"-like. This includes the first (input) layer, i.e. the landscape itself. Usually, we discard these changes and apply a gradient update to the parameters. In deep dreaming we instead discard the parameter changes and apply a gradient update to the input (see (11)). Gradient update here takes these changes and computes a new image which is the same landscape, but changed slightly so as to look more like whatever the network thinks a cat looks like. This is the essence of deep dreaming, where iterating this process allows networks to dream up features and shapes on a particular chosen image [39].
Just like in the previous subsection, we can write this deep dreaming system as a morphism in $\mathbf{Para}(\mathbf{Lens}(\mathcal{C}))$, whose parameter space now contains the input itself together with the parameters of the classifier. In other words, this is a lens whose $\mathbf{get}$ map is trivial. Its $\mathbf{put}$ map takes the current input, the (fixed) network parameters and a label, and unpacks to a gradient update applied at the input port.
We note that deep dreaming is usually presented without any loss function, as a maximisation of a particular activation in the last layer of the network output [51, Section 2]. This maximisation is done with gradient ascent, as opposed to gradient descent. However, this is just a special case of our framework where the loss function is the dot product (Example 3.2). The choice of the particular activation is encoded as a one-hot vector, and the loss function in that case essentially masks the network output, leaving active only the particular chosen activation. The final component is the gradient ascent: this is simply recovered by choosing a positive, instead of a negative, learning rate [51]. We explicitly unpack this in the following example.
[Deep dreaming, dot product loss, basic gradient update] Fix $\mathbf{Smooth}$ as base category, a parametric map $f : \mathbf{Para}(\mathbf{Smooth})(A, B)$ with parameter space $P$, the dot product loss (Example 3.2), basic gradient update (Example 4), and a positive learning rate $\epsilon$. Then the above $\mathbf{put}$ map simplifies. Since there is no state, its type reduces to $P \times A \times B \to A$ and its implementation to
\[
\mathbf{put}(p, a, b_t) = a + \epsilon \cdot \pi_A\, R[f]\big((p, a),\, b_t\big).
\]
Like in Example 4.1, this update can be rewritten as
\[
\mathbf{put}(p, a, b_t) = a + \epsilon \cdot \nabla_a \langle b_t,\, f(p, a) \rangle,
\]
making a few things apparent. This update does not depend on the prediction itself: no matter what the network has predicted, the goal is always to maximise particular activations. Which activations? The ones chosen by $b_t$. When $b_t$ is a one-hot vector, this picks out the activation of just one class to maximise, which is often done in practice.
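As an illustration, the following is a minimal numerical sketch of this update, assuming we are given the input component of the network's reverse derivative as a function R_f_input; it is not the interface of the library of Section 5.

```python
import numpy as np

def dream_step(R_f_input, p, a, b_t, lr=0.1):
    """One deep-dreaming step: gradient ascent on the input a.

    R_f_input((p, a), v) is assumed to return the input component of the
    reverse derivative of the network f, i.e. (J_a f(p, a))^T v.
    """
    grad_a = R_f_input((p, a), b_t)   # with dot-product loss the covector is b_t itself
    return a + lr * grad_a            # positive learning rate: ascend

# Toy usage: a linear 'network' f(p, a) = p @ a.
P = np.array([[1.0, 2.0], [0.5, -1.0]])
R_f_input = lambda pa, v: pa[0].T @ v
a = np.zeros(2)
b_t = np.array([1.0, 0.0])            # one-hot: amplify the activation of class 0
for _ in range(10):
    a = dream_step(R_f_input, P, a, b_t)
```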
While we present only the most basic picture, there is plenty of room left for exploration. The work of [51, Section 2] adds an extra regularisation term to the image. In general, the neural network is sometimes modified to copy a number of internal activations which are then exposed on the output layer; maximising all of these activations often produces more visually appealing results. In the literature we did not find an example which uses Softmax cross-entropy (Example 3.2) as a loss function in deep dreaming, which seems like the more natural choice in this setting. Furthermore, while deep dreaming commonly uses basic gradient descent, there is nothing preventing the use of any of the optimiser lenses discussed in the previous section, or even doing deep dreaming in the context of Boolean circuits. Lastly, the learning iteration described at the end of the previous subsection can be modelled here in an analogous way.
5. Implementation
We provide a proof-of-concept implementation as a Python library; full usage examples, source code, and experiments can be found at [19]. We demonstrate the correctness of our library empirically using a number of experiments implemented both in our library and in Keras [12], a popular framework for deep learning. For example, one experiment is a model for the MNIST image classification problem [37]: we implement the same model in both frameworks and achieve comparable accuracy. Note that despite similarities between the user interfaces of our library and of Keras, a model in our framework is constructed as a composition of parametric lenses. This is fundamentally different from the approach taken by Keras and other existing libraries, and highlights how our proposed algebraic structures naturally guide programming practice.
In summary, our implementation demonstrates the advantages of our approach. Firstly, computing the gradients of the network is greatly simplified through the use of lens composition. Secondly, model architectures can be expressed in a principled, mathematical language, as morphisms of a monoidal category. Finally, the modularity of our approach makes it easy to see how various aspects of training can be modified: for example, one can define a new optimisation algorithm simply by defining an appropriate lens. We now give a brief sketch of our implementation.
5.1. Constructing a Model with Lens and Para
We model a lens in our library with the Lens class, which consists of a pair of maps fwd and rev corresponding to $\mathbf{get}$ and $\mathbf{put}$, respectively. For example, we write the identity lens as follows:
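For instance, using a minimal stand-in for the Lens class (the library's actual class in [19] may differ in details):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Lens:  # minimal stand-in for the library's Lens class
    fwd: Callable[[Any], Any]
    rev: Callable[[Any], Any]

# The identity lens: the forward pass returns its input unchanged, and the
# backward pass returns the incoming change unchanged.
identity = Lens(
    fwd=lambda x: x,
    rev=lambda x_dy: x_dy[1],  # rev receives the pair (input, change)
)
```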
The composition (in diagrammatic order) of Lens values f and g is written f >> g, and monoidal composition as f @ g. Similarly, the type of $\mathbf{Para}$ maps is modelled by the Para class, with composition and monoidal product written the same way. Our library provides several primitive Lens and Para values.
Let us now see how to construct a single layer neural network from the composition of such primitives. Diagrammatically, we will construct a model consisting of a single dense layer, as in Example 3.1.1 and below.
[string diagram: a dense layer as the composite of linear, bias, and activation]
Recall that the parameters of linear are the coefficients of a matrix, and the underlying lens has as its forward map the function $(M, x) \mapsto Mx$, where $M$ is the matrix whose coefficients are the parameters and $x$ is the input vector. The bias map is even simpler: the forward map of the underlying lens is simply pointwise addition of inputs and parameters, $(b, x) \mapsto b + x$. Finally, the activation map simply applies a nonlinear function (e.g., sigmoid) to the input, and thus has the trivial (unit) parameter space. The representation of this composition in code is straightforward: we can simply compose the three primitive Para maps as in (6):
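The following is a minimal, self-contained sketch of this composite; the Para stand-in and the primitive names here are illustrative assumptions rather than the exact interface of [19]:

```python
import numpy as np

class Para:
    """Minimal stand-in: a parametric map as (parameter initialiser, apply)."""
    def __init__(self, init, apply):
        self.init, self.apply = init, apply
    def __rshift__(self, other):
        # Composition in diagrammatic order: parameters are paired.
        return Para(lambda: (self.init(), other.init()),
                    lambda p, x: other.apply(p[1], self.apply(p[0], x)))

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
linear = lambda m, n: Para(lambda: np.zeros((n, m)), lambda M, x: M @ x)
bias = lambda n: Para(lambda: np.zeros(n), lambda b, x: b + x)
activation = lambda f: Para(lambda: None, lambda _, x: f(x))

# A single dense layer as the composite of the three primitives, as in (6):
dense_layer = linear(784, 10) >> bias(10) >> activation(sigmoid)
p0 = dense_layer.init()
y = dense_layer.apply(p0, np.zeros(784))  # a 10-dimensional output
```

Note that this sketch only models the forward structure and its composition; in the library each primitive also carries its reverse derivative as a lens.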
Note that by constructing model architectures in this way, the computation of reverse derivatives is greatly simplified: we obtain the reverse derivative 'for free' as the $\mathbf{put}$ map of the model. Furthermore, adding new primitives is also simplified: the user need simply provide a function and its reverse derivative in the form of a Para map. Finally, notice also that our approach is truly compositional: we can define a hidden-layer neural network simply by composing two dense layers, as follows:
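Continuing the illustrative sketch above, with a hidden layer of, say, 128 units:

```python
# A network with one hidden layer, built by composing two dense layers
# (shapes are illustrative).
def dense(m, n, act):
    return linear(m, n) >> bias(n) >> activation(act)

model = dense(784, 128, sigmoid) >> dense(128, 10, sigmoid)
```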
5.2. Learning
Now that we have constructed a model, we also need to use it to learn from data. Concretely, we will construct a full parametric lens as in Figure 2, then extract its $\mathbf{put}$ map to iterate over the dataset.
By way of example, let us see how to construct the following parametric lens, representing basic gradient descent over a single layer neural network with a fixed learning rate:
(12) [string diagram: the full parametric lens combining the model, loss map, learning rate, and gradient-descent update]
This morphism is constructed essentially as below, where apply_update($\alpha$, $f$) represents the 'vertical stacking' of the optimiser $\alpha$ atop the model $f$:
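As a rough sketch of this construction (the names gradient_descent, mse_loss and learning_rate are illustrative assumptions, not necessarily the library's API):

```python
# Reparameterise the model's parameter port with basic gradient descent,
# then compose with a loss lens and a fixed learning rate, as in (12).
step_lens = apply_update(gradient_descent, model >> mse_loss >> learning_rate(0.01))
```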
Now, given the parametric lens of (12), one can construct a morphism step which is simply the $\mathbf{put}$ map of the lens. Training the model then consists of iterating the step function over dataset examples $(a_i, b_i)$ to optimise some initial choice of parameters $p_0$, by letting $p_{i+1} = \mathrm{step}(p_i, a_i, b_i)$.
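In code, this iteration is a simple fold over the dataset (a sketch, assuming step, an initial parameter p0, and a dataset of input-label pairs are in scope):

```python
# Iterate the put map of the step lens over the dataset.
p = p0
for a, b in dataset:
    p = step(p, a, b)
```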
Note that our library also provides a utility function to construct the step map from its various pieces:
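Its use might look as follows; the name supervised_step and the argument order here are assumptions, and the actual interface is documented in [19]:

```python
# Assemble the step map from model, optimiser, loss, and learning rate.
step = supervised_step(model, gradient_descent, mse_loss, learning_rate(0.01))
```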
For an end-to-end example of model training and iteration, we refer the interested reader to the experiments accompanying the code [19].
6. Related Work
The work [26] is closely related to ours, in that it provides an abstract categorical model of backpropagation. However, it differs in a number of key aspects. We give a complete lens-theoretic explanation of what is back-propagated via (i) the use of CRDCs to model gradients; and (ii) the $\mathbf{Para}$ construction to model parametric functions and parameter update. We can thus go well beyond [26] in terms of examples: their example of smooth functions and basic gradient descent is covered in our Subsection 4.1.
We also explain some of the constructions of [26] in a more structured way. For example, rather than considering the category of learners of [26] as primitive, here we construct it as a composite of two more basic constructions (the $\mathbf{Para}$ and $\mathbf{Lens}$ constructions). This flexibility could be used, for example, to compositionally replace $\mathbf{Para}$ with a variant allowing parameters to come from a different category, or to replace lenses with the category of optics [45], enabling us to model things such as control flow using prisms.
One more relevant aspect is functoriality. We use a functor to augment a parametric map with its backward pass, just like [26]. However, they additionally augment this map with a loss map and gradient descent using a functor as well. This adds extra conditions on the partial derivatives of the loss function: it needs to be invertible in the second variable. This constraint was not justified in [26], nor is it a constraint that appears in machine learning practice. This led us to reexamine their constructions and come up with a reformulation that does not require it. While loss maps and optimisers are mentioned in [26] as parts of the aforementioned functor, here they are extracted out and play a key role: loss maps are parametric lenses and optimisers are reparameterisations. Thus, in this paper we instead use $\mathbf{Para}$-composition to add the loss map to the model, and 2-cells to add optimisers. The mentioned inverse of the partial derivative of the loss map in the second variable was also hypothesised to be relevant to deep dreaming. We have investigated this possibility thoroughly in our paper, showing that it is gradient update which is used to dream up pictures. We also correct a small issue in Theorem III.2 of [26]. There, the morphisms were defined up to an equivalence (pg. 4 of [26]) but, unfortunately, the functor defined in Theorem III.2 does not respect this equivalence relation. Our approach instead uses 2-cells which come from the universal property of $\mathbf{Para}$: a 2-cell between parametric maps is a lens, and hence has two components, a forward and a backward map. By comparison, we can see the equivalence relation of [26] as being induced by a map only, and not a lens. Our approach highlights the importance of the 2-categorical structure of learners. In addition, it does not treat the backpropagation functor as primitive: in our case, this functor arises from applying $\mathbf{Para}$ to a canonical functor existing for any reverse derivative category, not just $\mathbf{Smooth}$. Lastly, in our paper we took advantage of the graphical calculus for $\mathbf{Para}$, redrawing many diagrams appearing in [26] in a structured way.
Other than [26], there are a few more relevant papers. The work of [21] contains a sketch of some of the ideas this paper evolved from. It is based on the interplay of optics with parameterisation, albeit framed in the setting of diffeological spaces, and requires cartesian and local cartesian closed structure on the base category. Lenses and learners are studied in the eponymous work of [25], which observes that learners are parametric lenses. They do not explore any of the relevant $\mathbf{Para}$ or CRDC structure, but make the distinction between symmetric and asymmetric lenses, studying how they are related to the learners defined in [26]. A lens-like implementation of automatic differentiation is the focus of [24], but learning algorithms are not studied there. The relationship between the category-theoretic perspective on probabilistic modelling and gradient-based optimisation is studied in [49], which also studies a variant of the $\mathbf{Para}$ construction. The use of Cartesian differential categories to study learning is found in [55]: they extend the differential operator to work on stateful maps, but do not study lenses, parameterisation or update maps. The work of [27] studies deep learning in the context of Cycle-consistent Generative Adversarial Networks [62] and formalises it via free and quotient categories, making parallels to the categorical formulations of database theory [53]. They do use the $\mathbf{Para}$ construction, but do not relate it to lenses or reverse derivative categories. A general survey of category-theoretic approaches to machine learning, covering many of the above papers, can be found in [50]. Lastly, the concept of parametric lenses has started appearing in recent formulations of categorical game theory and cybernetics [10, 11]. The work of [10] generalises the study of parametric lenses to parametric optics and connects it to game-theoretic concepts such as Nash equilibria.
7. Conclusions and Future Directions
We have given a categorical foundation of gradient-based learning algorithms which achieves a number of important goals. The foundation is principled and mathematically clean, based on the fundamental idea of a parametric lens. The foundation covers a wide variety of examples: different optimisers and loss maps in gradient-based learning, different architectures and layer structures, different settings where gradient-based learning happens (smooth functions vs. boolean circuits), adversarial unsupervised learning, and both learning of parameters and learning of inputs (deep dreaming). Finally, the foundation is more than a mere abstraction: we have also shown how it can be used to give a practical implementation of learning, as discussed in Section 5.
There are a number of important directions which become possible to explore because of this work. One of the most exciting is a more comprehensive study of neural network architectures through the category-theoretic perspective. Neural network architectures have begun to be studied using category-theory-adjacent machinery in the context of Geometric Deep Learning ([9]) and Topological Deep Learning ([42]). Recurrent neural networks, in particular, have been studied in [55], in the context of differential categories and the concept of delayed trace introduced in the same paper. Despite this, a comprehensive categorical study of architectures is still missing in the literature. As first noticed in [41], many architectures such as recurrent and recursive neural networks have close parallels to concepts in functional programming, such as folds, unfolds and accumulating maps. As these functional concepts have clear categorical semantics, it is natural to ask whether these semantics can be used to study neural network architectures. We believe the categorical framework presented in this paper can serve as a natural starting point for such a study. Future work also includes modelling some classical systems, such as Support Vector Machines [17], which should be possible with the use of loss maps such as hinge loss.
In all our settings we have fixed an optimiser beforehand. The work of [2] describes a meta-learning approach which sees the optimiser as a neural network whose parameters and gradient update rule can be learned. This is an exciting prospect, since one can model optimisers as parametric lenses, and our framework covers learning with parametric lenses.
Future work also includes using the full power of the CRDC axioms. In particular, axioms RD.6 and RD.7, which deal with the behaviour of higher-order derivatives, were not exploited in our work, but they should play a role in modelling supervised learning algorithms that use higher-order derivatives (for example, the Hessian) for additional optimisations. Taking this idea in a different direction, one can see that much of our work can be applied to any functor of the appropriate lens-valued form; it does not necessarily have to arise from the reverse derivative operator of a CRDC. Moreover, by working with more generalised forms of the lens category (such as dependent lenses), we may be able to capture ideas related to supervised learning on manifolds. And, of course, we can vary the parameter space to endow it with different structure from the functions we wish to learn. In this vein, we wish to use fibrations/dependent types to model the use of tangent bundles: this would foster the extension of the correct-by-construction paradigm to machine learning, thereby addressing the widely acknowledged problem of trusted machine learning. These possibilities are made much more approachable by the compositional nature of our framework. Another key topic for future work is to link gradient-based learning with game theory. At a high level, the former takes small incremental steps to achieve an equilibrium, while the latter aims to do so in one fell swoop. Formalising this intuition should be possible with our lens-based framework and the lens-based framework for game theory [28]. Finally, because our framework is quite general, in future work we plan to consider further modifications and additions to encompass probabilistic, non-gradient-based, and other forms of non-supervised learning, including genetic algorithms and reinforcement learning.
Acknowledgements. Fabio Zanasi acknowledges support from EPSRC EP/V002376/1. Geoff Cruttwell acknowledges support from NSERC.
References
- [1] S. Abramsky and B. Coecke. A categorical semantics of quantum protocols. In Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004., pages 415–425, 2004.
- [2] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In 30th Conference on Neural Information Processing Systems (NIPS), 2016.
- [3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv:1701.07875, 2017.
- [4] John C. Baez and Jason Erbele. Categories in Control. Theory and Applications of Categories, 30(24):836–881, 2015.
- [5] R. Blute, R. Cockett, and R. Seely. Cartesian Differential Categories. Theory and Applications of Categories, 22(23):622–672, 2009.
- [6] Aaron Bohannon, J. Nathan Foster, Benjamin C. Pierce, Alexandre Pilkiewicz, and Alan Schmitt. Boomerang: Resourceful lenses for string data. SIGPLAN Not., 43(1):407–419, 2008.
- [7] Guillaume Boisseau. String Diagrams for Optics. arXiv:2002.11480, 2020.
- [8] Filippo Bonchi, Pawel Sobocinski, and Fabio Zanasi. The calculus of signal flow diagrams I: linear relations on streams. Inf. Comput., 252:2–29, 2017.
- [9] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Velickovic. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. arXiv:2104.13478, 2021.
- [10] Matteo Capucci, Bruno Gavranovic, Jules Hedges, and E. F. Rischel. Towards foundations of categorical cybernetics. arXiv:2105.06332, 2021.
- [11] Matteo Capucci, Neil Ghani, Jérémy Ledent, and Fredrik Nordvall Forsberg. Translating Extensive Form Games to Open Games with Agency. arXiv:2105.06763, 2021.
- [12] François Chollet et al. Keras. https://keras.io, 2015.
- [13] Bryce Clarke, Derek Elkins, Jeremy Gibbons, Fosco Loregian, Bartosz Milewski, Emily Pillmore, and Mario Román. Profunctor optics, a categorical update. arXiv:2001.07488, 2020.
- [14] J. R. B. Cockett and G. S. H. Cruttwell. Differential Structure, Tangent Structure, and SDG. Applied Categorical Structures, 22(2):331–417, 2014.
- [15] J. Robin B. Cockett, Geoff S. H. Cruttwell, Jonathan Gallagher, Jean-Simon Pacaud Lemay, Benjamin MacAdam, Gordon D. Plotkin, and Dorette Pronk. Reverse derivative categories. In Proceedings of the 28th Computer Science Logic (CSL) conference, 2020.
- [16] Bob Coecke and Aleks Kissinger. Picturing Quantum Processes: A First Course in Quantum Theory and Diagrammatic Reasoning. Cambridge University Press, 2017.
- [17] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20(3):273–297, 1995.
- [18] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training Deep Neural Networks with binary weights during propagations. arXiv:1511.00363, 2016.
- [19] Anonymous CRCoauthors. Numeric Optics: A python library for constructing and training neural networks based on lenses and reverse derivatives. https://github.com/anonymous-c0de/esop-2022, 2022.
- [20] Geoffrey S. H. Cruttwell, Bruno Gavranovic, Neil Ghani, Paul W. Wilson, and Fabio Zanasi. Categorical foundations of gradient-based learning. In ESOP, volume 13240 of Lecture Notes in Computer Science, pages 1–28. Springer, 2022.
- [21] David Dalrymple. Dioptics: a common generalization of open games and gradient-based learners. SYCO7, 2019.
- [22] Alexey Dosovitskiy and Thomas Brox. Inverting convolutional networks with convolutional networks. arXiv:1506.02753, 2015.
- [23] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
- [24] Conal Elliott. The simple essence of automatic differentiation (differentiable functional programming made easy). arXiv:1804.00746, 2018.
- [25] Brendan Fong and Michael Johnson. Lenses and learners. In Proceedings of the 8th International Workshop on Bidirectional transformations (Bx@PLW), 2019.
- [26] Brendan Fong, David I. Spivak, and Rémy Tuyéras. Backprop as functor: A compositional perspective on supervised learning. In Proceedings of the Thirty fourth Annual IEEE Symposium on Logic in Computer Science (LICS 2019), pages 1–13. IEEE Computer Society Press, 2019.
- [27] Bruno Gavranovic. Compositional deep learning. arXiv:1907.08292, 2019.
- [28] Neil Ghani, Jules Hedges, Viktor Winschel, and Philipp Zahn. Compositional game theory. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '18, pages 472–481, 2018.
- [29] Dan R. Ghica, Achim Jung, and Aliaume Lopez. Diagrammatic Semantics for Digital Circuits. arXiv:1703.10247, 2017.
- [30] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. arXiv:1406.2661, 2014.
- [31] Andreas Griewank and Andrea Walther. Evaluating derivatives: principles and techniques of algorithmic differentiation. Society for Industrial and Applied Mathematics, 2008.
- [32] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
- [33] Jules Hedges. Limits of bimorphic lenses. arXiv:1808.05545, 2018.
- [34] C. Hermida and R. D. Tennent. Monoidal indeterminates and categories of possible worlds. Theor. Comput. Sci., 430:3–22, 2012.
- [35] Michael Johnson, Robert Rosebrugh, and R.J. Wood. Lenses, fibrations and universal translations. Mathematical structures in computer science, 22:25–42, 2012.
- [36] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
- [37] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324, 1998.
- [38] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. arXiv:1412.0035, 2014.
- [39] Alexander Mordvintsev, Christopher Olah, and Mike Tyka. Inceptionism: Going Deeper into Neural Networks, 2015.
- [40] Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. arXiv:1412.1897, 2014.
- [41] Christopher Olah. Neural Networks, Types, and Functional Programming – colah’s blog, 2015.
- [42] Mathilde Papillon, Sophia Sanborn, Mustafa Hajij, and Nina Miolane. Architectures of Topological Deep Learning: A Survey on Topological Neural Networks. arXiv:2304.10031, 2023.
- [43] Robin Piedeleu and Fabio Zanasi. An introduction to string diagrams for computer scientists. arXiv:2305.08768, 2023.
- [44] B.T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1 – 17, 1964.
- [45] Mitchell Riley. Categories of optics. arXiv:1809.00738, 2018.
- [46] Florian Schäfer and Anima Anandkumar. Competitive Gradient Descent. arXiv:1905.12103, 2020.
- [47] Peter Selinger. Control categories and duality: on the categorical semantics of the lambda-mu calculus. Mathematical Structures in Computer Science, 11(2):207–260, 2001.
- [48] Sanjit A. Seshia and Dorsa Sadigh. Towards verified artificial intelligence. arXiv:1606.08514, 2016.
- [49] Dan Shiebler. Categorical Stochastic Processes and Likelihood. Compositionality, 3(1), 2021.
- [50] Dan Shiebler, Bruno Gavranović, and Paul Wilson. Category Theory in Machine Learning. arXiv:2106.07032, 2021.
- [51] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv:1312.6034, 2014.
- [52] The Royal Society. Explainable AI: the basics - policy briefing, 2019.
- [53] David I. Spivak. Functorial data migration. arXiv:1009.1166, 2010.
- [54] David I. Spivak. Generalized Lens Categories via functors $\mathcal{C}^{\rm op}\to\mathsf{Cat}$. arXiv:1908.02202, 2022.
- [55] David Sprunger and Shin-ya Katsumata. Differentiable causal computations via delayed trace. In Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS ’19. IEEE Press, 2019.
- [56] Albert Steckermeier. Lenses in functional programming. Preprint, available at https://sinusoid.es/misc/lager/lenses.pdf, 2015.
- [57] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 1139–1147, Atlanta, Georgia, USA, 2013.
- [58] D. Turi and G. Plotkin. Towards a mathematical operational semantics. In Proceedings of Twelfth Annual IEEE Symposium on Logic in Computer Science, pages 280–291, 1997.
- [59] Paul Wilson and Fabio Zanasi. Reverse derivative ascent: A categorical approach to learning boolean circuits. In Proceedings of Applied Category Theory (ACT), 2020.
- [60] Paul Wilson and Fabio Zanasi. An axiomatic approach to differentiation of polynomial circuits. Journal of Logical and Algebraic Methods in Programming, 135:100892, 2023.
- [61] Paul W. Wilson and Fabio Zanasi. Categories of differentiable polynomial circuits for machine learning. In ICGT, volume 13349 of Lecture Notes in Computer Science, pages 77–93. Springer, 2022.
- [62] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. arXiv:1703.10593, 2017.
Appendix A More details on Parametric Categories
As mentioned in the main text, coherence rules for combining the two operations in (2) work as expected, in the sense that these diagrams can ultimately be 'compiled' down to string diagrams for monoidal categories. For example, given maps $f$ and $g$ with reparametrisations $\alpha$ and $\beta$ respectively, one could either first reparametrise $f$ and $g$ separately and then compose the results (below left), or compose first and then reparametrise jointly (below right):
(13) [string diagrams: the two composites described above]
As expected, translating these two operations into string diagrams for monoidal categories yields equivalent representations of the same morphism:
(14) [string diagram: the common underlying monoidal morphism]
Remark 4.
There is a 2-categorical perspective on $\mathbf{Para}$, which we glossed over in this paper for the sake of simplicity. In particular, the reparametrisations described above can also be seen as equipping $\mathbf{Para}(\mathcal{C})$ with 2-cells, giving a 2-categorical structure on $\mathbf{Para}(\mathcal{C})$. This is also coherent with respect to base change: if $\mathcal{C}$ and $\mathcal{D}$ are strict symmetric monoidal categories, and $F : \mathcal{C} \to \mathcal{D}$ is a lax symmetric monoidal functor, then there is an induced 2-functor $\mathbf{Para}(F) : \mathbf{Para}(\mathcal{C}) \to \mathbf{Para}(\mathcal{D})$ which agrees with $F$ on objects. This 2-functor is straightforward: for a 1-cell, it applies $F$ to both the parameter object and the underlying map, and uses the (lax) comparison morphism to get a map of the correct type. We will see how this base change becomes important when performing backpropagation on parametric maps (Eq. 5).
Lastly, we mention that $\mathbf{Para}(\mathcal{C})$ inherits the symmetric monoidal structure from $\mathcal{C}$, and that the induced 2-functor respects that structure. This allows us to compose neural networks not only in series, but also in parallel. For more detail on alternative viewpoints on the $\mathbf{Para}$ construction, including how it can be viewed as the Grothendieck construction of a certain indexed category, see [10].
Appendix B Background on Cartesian Reverse Differential Categories
Here we briefly review the definitions of Cartesian left additive category (CLAC), Cartesian reverse differential category (CRDC) and additive and linear maps in these categories. Note that in this appendix we follow the convention of [15] and write composition in diagrammatic order by juxtaposition of terms (rather than a semicolon) to shorten the form of many of the expressions.
A category is said to be Cartesian when there are chosen binary products $A \times B$, with projection maps $\pi_0, \pi_1$ and pairing operation $\langle -, - \rangle$, and a chosen terminal object $1$, with unique maps $!_A : A \to 1$ to the terminal object.
A left additive category [5, Definition 1.1.1] is a category such that each hom-set has commutative monoid structure, with addition operation $+$ and zero maps $0$, such that composition on the left preserves the additive structure: $f(g + h) = fg + fh$ and $f0 = 0$ for any appropriate $f$, $g$ and $h$.
A map $f$ in a CLAC is additive if it preserves the additive structure by composition on the right: $(g + h)f = gf + hf$ and $0f = 0$ for any maps $g$ and $h$.
[Additive in second component, (compare [5, Lemma 1.2.3])] A morphism $f : A \times B \to C$ is additive in the variable $B$ if it is an additive morphism of type $B \to C$ in the cartesian left-additive coKleisli category of the coreader comonad $A \times -$. (There are a few other names for this comonad. One is "the writer comonad", though this is often confused with the writer monad, which additionally necessitates a monoid structure on $A$. It is also called the reader comonad, because of its duality to the reader monad, as well as the "product comonad" or "environment comonad".)
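Concretely, in $\mathbf{Smooth}$ this says that for every fixed value of the first argument,
\[
f(a, b + b') = f(a, b) + f(a, b') \qquad \text{and} \qquad f(a, 0) = 0.
\]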
A Cartesian left additive category [5, Definition 1.2.1] is a left additive category which is Cartesian and such that all projection maps are additive.
We consider the category whose objects are cartesian left-additive categories and whose morphisms are cartesian left-additive functors, i.e. functors which preserve products and the commutative monoid structure on objects ([5, Definition 1.3.1]).
Lemma 5.
Let $\mathcal{C}$ and $\mathcal{D}$ be cartesian left-additive categories, and $F : \mathcal{C} \to \mathcal{D}$ a cartesian left-additive functor. Let $f$ be an additive morphism in $\mathcal{C}$. Then $F(f)$ is also additive.
The central definition of [15] is the following:
A Cartesian reverse differential category (CRDC) is a Cartesian left additive category $\mathcal{C}$ which has, for each map $f : A \to B$ in $\mathcal{C}$, a map
\[
R[f] : A \times B \to A
\]
satisfying seven axioms:
[RD.1] $R[f + g] = R[f] + R[g]$ and $R[0] = 0$.
[RD.2] $\langle a, b + c \rangle R[f] = \langle a, b \rangle R[f] + \langle a, c \rangle R[f]$ and $\langle a, 0 \rangle R[f] = 0$.
[RD.3] $R[1_A] = \pi_1$, $R[\pi_0] = \pi_1 \iota_0$, and $R[\pi_1] = \pi_1 \iota_1$ (where $\iota_0 = \langle 1, 0 \rangle$ and $\iota_1 = \langle 0, 1 \rangle$).
[RD.4] For a tupling of maps $f : A \to B$ and $g : A \to C$, the following equality holds:
\[
R[\langle f, g \rangle] = (1 \times \pi_0) R[f] + (1 \times \pi_1) R[g].
\]
And if $!_A : A \to 1$ is the unique map to the terminal object, $R[!_A] = 0$.
[RD.5] For composable maps $f : A \to B$ and $g : B \to C$,
\[
R[fg] = \langle \pi_0, \langle \pi_0 f, \pi_1 \rangle R[g] \rangle R[f].
\]
[RD.6] An equation on the iterated reverse derivative $R[R[R[f]]]$ expressing that the reverse derivative is linear in its vector variable; see [15] for the precise statement.
[RD.7] An equation on $R[R[R[R[f]]]]$ expressing the symmetry of mixed partial derivatives, where the map involved exchanges the middle two variables; see [15] for the precise statement.
As discussed in [15], these axioms correspond to familiar properties of the reverse derivative:
• [RD.1] says that differentiation preserves addition of maps, while [RD.2] says that differentiation is additive in its vector variable.
• [RD.3] and [RD.4] handle the derivatives of identities, projections, and tuples.
• [RD.5] is the (reverse) chain rule.
• [RD.6] says that the reverse derivative is linear in its vector variable.
• [RD.7] expresses the independence of order of mixed partial derivatives.
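As a concrete instance, in $\mathbf{Smooth}$ the reverse derivative of a map $f : \mathbb{R}^n \to \mathbb{R}^m$ is given by the transposed Jacobian, and [RD.5] then reads as the familiar chain rule:
\[
R[f](a, w) = J_f(a)^{\top} w, \qquad
R[fg](a, w) = J_f(a)^{\top}\, J_g(f(a))^{\top}\, w = R[f]\big(a,\; R[g](f(a), w)\big).
\]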
We proceed to prove the following theorem in three steps (Thm. 2, restated): the $\mathbf{Lens}$ construction defines a functor on the category of cartesian left-additive categories.
The first step is formally defining the category $\mathbf{Lens}(\mathcal{C})$.
Definition. Let $\mathcal{C}$ be a cartesian left-additive category. Then $\mathbf{Lens}(\mathcal{C})$ is the wide subcategory of the usual category of lenses over $\mathcal{C}$ containing only those lenses whose backward map is additive in the second component.
Compare this with the definition of the lens category via the Grothendieck construction ([54, Prop. 3.10]).
The second step is showing that this category is cartesian left-additive.
Proposition 6.
The category $\mathbf{Lens}(\mathcal{C})$ is cartesian left-additive.
Proof B.1.
We need to equip $\mathbf{Lens}(\mathcal{C})$ with a commutative monoid on every object in a way that is compatible with the cartesian structure. (We do not need to show that this monoid is unique, just that it exists and can be canonically defined.) That is, for every object we need to provide two morphisms:
• Unit. This is a lens whose forward map we set as the zero map $0$ and whose backward map is the delete map $!$.
• Multiplication. This is a lens whose forward map we set as the sum $+ : A \times A \to A$ and whose backward map is the copy map, sending an incoming change to both summands.
Additionally, these morphisms need to obey the monoid laws. This can be verified routinely.
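Explicitly, writing lenses as (forward, backward) pairs, these two morphisms are
\[
\big(\,0 : 1 \to A,\;\; !\, : 1 \times A \to 1\,\big)
\qquad \text{and} \qquad
\big(\,+ : A \times A \to A,\;\; \langle \pi_1, \pi_1 \rangle : (A \times A) \times A \to A \times A\,\big).
\]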
This defines the action of $\mathbf{Lens}$ on objects, i.e. on cartesian left-additive categories. Its action on morphisms is defined below.
Proposition 7.
Let $F : \mathcal{C} \to \mathcal{D}$ be a cartesian left-additive functor. This induces a cartesian left-additive functor between the corresponding categories of lenses:
(15)
\[
\mathbf{Lens}(F) : \mathbf{Lens}(\mathcal{C}) \to \mathbf{Lens}(\mathcal{D})
\]
where $\mathbf{Lens}(F)(f, f^{*}) = (F(f), F(f^{*}))$.
Proof B.2.
We need to prove that $\mathbf{Lens}(F)$ is a cartesian left-additive functor. To prove it is a functor, we need to:
• Define its action on objects and morphisms. We have done this in Prop. 7 itself;
• Prove that the backward maps in the image remain additive in the second component. This follows from Lemma 5;
• Prove identities are preserved. The identity lens in the domain gets mapped to the image under $F$ of its forward and backward components. By preservation of identities and products of $F$, this is equal to the identity lens;
• Prove composition is preserved. This can be shown by routine, albeit tedious, calculation.
To prove that it is additionally cartesian, we need to show that the image of every comonoid is also a comonoid, and that all maps preserve comonoids. We can understand the first part in terms of the actions on the counit and comultiplication of the comonoid.
• Counit. The action on the counit unpacks to the pair $(F(!), F(0))$. By preservation of terminal and additive maps of $F$, this morphism is equal to the counit in the codomain.
• Comultiplication. The action on the comultiplication unpacks to the pair $(F(\Delta), F(+))$. By $F$'s preservation of products and additive morphisms, this morphism is equal to the comultiplication in the codomain.
It is routine to show that these obey the corresponding laws and form a comonoid.
For the second part we need to show that the image of every lens preserves these comonoids. For the forward part this is true because $F$ preserves products. For the backward part this is true because $F$ is left-additive.
Lastly, we need to prove that this functor is additionally left-additive. This means that it preserves the monoid on every object. We unpack the action of $\mathbf{Lens}(F)$ on the unit and multiplication below.
• Unit. The action on the unit unpacks to the pair $(F(0), F(!))$. By preservation of additive and terminal maps of $F$, this morphism is equal to the unit in the codomain;
• Multiplication. The action on the multiplication unpacks to the pair $(F(+), F(\Delta))$. By preservation of coadditive maps and products of $F$, this morphism is equal to the multiplication in the codomain.
Seeing as these monoids in the codomain are of the same form as those in the domain, it is routine to show that they obey the monoid laws. This concludes the proof that $\mathbf{Lens}(F)$ is a cartesian left-additive functor.
What remains to show is that $\mathbf{Lens}$ preserves identities and composition of functors, which follows routinely, concluding the proof of Thm. 2.
This functor has additional structure: it is copointed. (Despite this, the functor does not carry a comonad structure, for reasons similar to why tangent categories do not.)
Proposition 8 (Copointed structure of $\mathbf{Lens}$).
There is a natural transformation which on components assigns to a cartesian left-additive category $\mathcal{C}$ the cartesian left-additive functor $\mathbf{Lens}(\mathcal{C}) \to \mathcal{C}$ which forgets the backward part of the lens.
Proof B.3.
We have shown how a putative reverse derivative combinator $R$ arises out of the functor above. What remains to prove is that this combinator satisfies the first five axioms of a CRDC.
(1) Additivity of reverse differentiation. This is recovered from preservation of the left-additive structure.
(2) Additivity of the reverse derivative in the second variable. This is recovered by the definition of $\mathbf{Lens}(\mathcal{C})$: the backward maps are additive in the second component.
(3) Coherence with identities and projections. Coherence with identities is recovered by the functor's preservation of identities, which sends the identity on every object to the identity lens; coherence with projections is recovered by preservation of the cartesian structure.
(4) Coherence with pairings. Recovered by preservation of the cartesian structure.
(5) Reverse chain rule. Recovered by functoriality.