--

Educating Artificial Intelligentsia

Theory of Graphs

Full Disclosure: As much as I love to be the main contradictor (cf. Clementino and Picado, 2007/2008, p. 6), I completely agree with Stilgoe (2023). As such, what follows may not be to the liking of you know who.

Let us begin at the beginning: Turing (1950), the godfather of AI, did not define ‘thinking’ and ‘intelligence’, along with many other concepts of significance. Here we discuss a scientific method of defining, which is not unrelated to the Weizenbaum test for AI framed in terms of ‘good’ (Stilgoe, 2023).

Before we begin to address the how-to of defining, let’s look at the excuse Turing deployed to evade the very exercise of defining, i.e., subjectivity (ibid.). For now, it suffices to recognize subjectivity as objectivity, albeit qualified, as in positional objectivity, which is not a newfound enlightenment but can be traced back to Maxwell (in the context of the planned perception of science, wherein varying a doctrine reveals different phenomena; see Lawvere, 2007; Posina, 2020).

Returning to the beginning, thinking is what thinking does (functional definition). One immediate problem with functional definitions, as Stephen Jay Gould pointed out in the context of academic abuses of the theory of evolution, is that, to take an illustration, a pen can be used to scratch one’s back, but it makes no sense to define ‘pen’ in terms of scratching. So, we refine the method of defining: pen is what pen is good for, or, equivalently, pen is what wouldn’t be but for pen, which leads to a definition of ‘pen’ in terms of writing (while excluding scratching). This ‘good for’ method is used to define mathematical constructs. For example, SUM is a whole that is completely determined by its parts (Lawvere and Rosebrugh, 2003, pp. 26–31), and TRUE is a distinguished point of a totality of truth values that parameterizes all parts of every object of the corresponding category of objects (e.g., sets, dynamical systems, functions, and graphs; Lawvere and Schanuel, 2009, pp. 334–357). It is this universal mapping property definition of the subobject classifier that is the basis of the all-too-familiar calculation of the number of subsets of a set A using the formula 2^|A|, where the base 2 is the size of the totality of truth values, i.e., the set {false, true}, in the category of sets, while the exponent |A| denotes the size of the set A. Introduced by Samuel (1948), this universal mapping property definition of an object of a category in terms of its relations to all objects of the category is a standard and useful method of definition in the mathematical sciences.
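To make the counting concrete, here is a minimal sketch in Python (the function name and the example set are illustrative assumptions, not taken from any referenced source): it enumerates the maps from a set A to the two-element set of truth values and checks that the subsets so obtained number 2^|A|.

```python
from itertools import product

def subsets_via_characteristic_functions(A):
    """Enumerate subsets of A by enumerating all maps A -> {False, True}."""
    elements = list(A)
    subsets = []
    for truth_values in product([False, True], repeat=len(elements)):
        # Each assignment of truth values is one characteristic function
        # A -> {False, True}; the corresponding subset collects the
        # elements mapped to True.
        subsets.append({a for a, included in zip(elements, truth_values) if included})
    return subsets

A = {'x', 'y', 'z'}
subsets = subsets_via_characteristic_functions(A)
assert len(subsets) == 2 ** len(A)  # 2^|A| = 8 subsets for a 3-element set
```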

Along these lines, we can work on defining AI, beginning with ‘intelligence’. Intelligence is what intelligence is good for. Equivalently, human intelligence is that which wouldn’t be but for human intelligence. Sun and moon would be whatever/wherever they are even in the absence of human intelligence (possibly represented differently, assuming a humanity with consciousness-sans-intelligence). However, but for human intelligence there wouldn’t be science: a hallmark of intelligence! As is our wont, reminiscent of a mother celebrating her daughter learning, we all are natural-born learners struggling to transform our procedural knowledge into the declarative understanding needed to sustain our unwavering commitment to make sense of the blooming, buzzing confusion (it’s not all that confusing unless one believes particulars make us wiser, à la James, 1902/2009, p. 5) we are suspended in. As a litmus test of our understanding, we try to teach people and get things to do what we can do. AI, with Minsky et al. getting computers to prove theorems, ended up serving as a launching pad for wishful thinking (oblivious to reality). This is somewhat perplexing given that the pioneers of AI, soon after getting computer programs to prove theorems, were sensible enough to place the abstraction of mathematical theories (with theorems as statements, as in sentences in a story) at the top of their to-do list. One (plausible) reason that this got lost in the juvenile selfie-infatuation of AI (not only in the contemporary wave of fear, but also in its earlier avatar, the 90s wave of washing machines with neural networks; see also Geman and Geman, 2016) has a lot to do with the disconnect between computer science and mathematics.

In the spirit of reconnecting computer science and mathematics for the express purpose of breathing new life into AI, recall a mathematical advance from the early 1960s, an advance on par with Newtonian mechanics in physics and Darwinian evolution in biology. A mathematical theory, prior to F. William Lawvere’s Functorial Semantics of Algebraic Theories (Lawvere, 1963/2004/2013), was a list of statements, which together determined whether a given object is this or that. So, a theory of a universe of discourse, say, the category of graphs (consisting of dots and arrows), had no choice but to leave the given universe for one of arbitrary symbols, words, and sentences, i.e., language, with no readily discernible kinship with graphs. Following Lawvere’s functorial semantics, a theory of a given category of objects is a [sub]category with the objects’ basic properties as objects and the mutual determinations of those properties as morphisms (Lawvere, 2003; see also Posina, Ghista, and Roy, 2017). Simply put, in the words of my good friend Dr. Salk, a theory of cats is a cat. So it is with the category of graphs, whose theory is itself a graph (displayed at the top of this post). Note that a theory of a category of objects is adequate to completely characterize every object and to tell apart morphisms of the category (e.g., a singleton set 1 = {*} is adequate to list all the elements of every set of the category of sets, since elements of any set A are in 1–1 correspondence with its points a: 1 → A; it is also adequate to tell apart functions: given a parallel pair of functions f, g: A → B, if there is an element ‘a’ at which f(a) is not equal to g(a), then f is not equal to g). Along with Lawvere’s functorial semantics, the sketches of Bastiani and Ehresmann (1972) and Grothendieck’s descent (see Clementino and Picado, 2007/2008, p. 15) contributed to the monumental development of our mathematical understanding of mathematics, wherein the relationship between particulars, theory, models, presentations, and doctrine is spelled out in a spellbinding display of science: the ever-proper alignment of reason with experience.
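As a minimal sketch of the adequacy claim for sets, staying within what the paragraph above says (the Python names below are hypothetical): elements of a set A are represented as points 1 → A, i.e., functions from a one-element set, and such points suffice to tell apart a parallel pair of functions f, g: A → B.

```python
# A sketch (hypothetical names): elements of a set A as points 1 -> A,
# i.e., functions from the one-element set 1 = {'*'}, and the use of
# such points to tell apart parallel functions f, g: A -> B.

STAR = '*'  # the sole element of the one-element set 1

def points(A):
    """One function 1 -> A for each element of A (elements as points)."""
    return [lambda _=STAR, a=a: a for a in A]

A = {0, 1, 2}
f = lambda a: a + 1   # a parallel pair f, g: A -> B with B = {0, 1, 2, 3}
g = lambda a: 0

# f and g are distinct because they disagree at some point a: 1 -> A,
# e.g., f(1) = 2 while g(1) = 0.
assert any(f(p(STAR)) != g(p(STAR)) for p in points(A))
```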

Now, given that science figures prominently in the definition of AI, it seems sensible and reasonable to get AI to do science. In doing so, we also get to demystify science (cf. Sarewitz, 2017) and establish that the effectiveness of mathematics in the natural sciences, with ‘natural’ understood as ‘Becoming consistent with Being’, is within the reach of reason (cf. Wigner, 1960; see also Posina and Roy, 2022, 2023). More explicitly, we begin with the statistical abstraction of the universal mapping property definition of SUM (e.g., 1 + 1 = 2; https://playinmath.wordpress.com/2022/07/23/letting-students-discover-the-definition-of-sum/) with the objective of recreating the architecture of the mathematical sciences (cf. Lawvere, 2021).
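For readers who want the universal mapping property of SUM spelled out concretely before any abstraction of it, here is a minimal sketch in Python (the function names are hypothetical, chosen only for illustration): the sum A + B is a tagged disjoint union, and any pair of maps f: A → C, g: B → C determines a unique map [f, g]: A + B → C agreeing with f and g along the injections; in particular, 1 + 1 = 2.

```python
# A sketch (hypothetical names) of the universal mapping property of SUM
# for sets: A + B is the tagged disjoint union, and a pair of maps
# f: A -> C, g: B -> C factors uniquely through A + B.

def coproduct(A, B):
    """Return the sum A + B together with its two injections."""
    A_plus_B = [('left', a) for a in A] + [('right', b) for b in B]
    inject_left = lambda a: ('left', a)
    inject_right = lambda b: ('right', b)
    return A_plus_B, inject_left, inject_right

def copair(f, g):
    """The map [f, g]: A + B -> C determined by f and g."""
    return lambda tagged: f(tagged[1]) if tagged[0] == 'left' else g(tagged[1])

# 1 + 1 = 2: the sum of two one-element sets has exactly two elements.
one = {'*'}
two, i_left, i_right = coproduct(one, one)
assert len(two) == 2

# Restricting [f, g] along the injections gives back f and g.
f = lambda _: 'heads'
g = lambda _: 'tails'
h = copair(f, g)
assert h(i_left('*')) == f('*') and h(i_right('*')) == g('*')
```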

In closing, along with this or that test (cf. Stilgoe, 2023), what we need is a renewed commitment to sensibility and reason (notwithstanding nature.com talking in tongues: belief, faith, oracles, and pronouncements; see Nature Editorial, 2016), keeping in mind that reason depends on the universe of discourse (cf. objective logic; see Lawvere, 1994, 2003; Lawvere and Rosebrugh, 2003, pp. 193–212, 239–240).

References

Bastiani, A. and Ehresmann, C. (1972) Categories of sketched structures, Cahiers de Topologie et Géométrie Différentielle 13(2): 104–214. http://www.numdam.org/item/CTGDC_1972__13_2_104_0.pdf

Clementino, M. M. and Picado, J. (2007/2008) An interview with F. William Lawvere, Bulletin of the International Center for Mathematics. http://www.mat.uc.pt/~picado/lawvere/interview.pdf

Geman, D. and Geman, S. (2016) Science in the age of selfies, Proceedings of the National Academy of Sciences USA 113(34): 9384–9387. https://www.pnas.org/doi/10.1073/pnas.1609793113

James, W. (1902/2009) The Varieties of Religious Experience: A Study in Human Nature. https://csrs.nd.edu/assets/59930/williams_1902.pdf

Lawvere, F. W. (1963) Functorial semantics of algebraic theories, Proceedings of the National Academy of Sciences USA 50(5): 869–872. https://www.pnas.org/doi/10.1073/pnas.50.5.869

Lawvere, F. W. (1994) Tools for the advancement of objective logic: Closed categories and toposes, The Logical Foundations of Cognition, Oxford University Press, pp. 43–56. https://github.com/mattearnshaw/lawvere/blob/master/pdfs/1994-tools-for-the-advancement-of-objective-logic-closed-categories-and-toposes.pdf

Lawvere, F. W. (2003) Foundations and applications: Axiomatization and education, The Bulletin of Symbolic Logic 9(2): 213–224. https://github.com/mattearnshaw/lawvere/blob/master/pdfs/2003-foundations-and-applications-axiomatization-and-education.pdf

Lawvere, F. W. (2004) Functorial semantics of algebraic theories and some algebraic problems in the context of functorial semantics of algebraic theories, Reprints in Theory and Applications of Categories 5: 1–121. http://www.tac.mta.ca/tac/reprints/articles/5/tr5.pdf

Lawvere, F. W. (2007) Axiomatic cohesion, Theory and Applications of Categories 19(3): 41–49. http://www.tac.mta.ca/tac/volumes/19/3/19-03.pdf

Lawvere, F. W. (2013) Fifty years of functorial semantics, Celebrating Bill Lawvere and Fifty Years of Functorial Semantics, Union College Mathematics Conference, NY. https://www.math.union.edu/~niefiels/13conference/Web/Slides/Fifty_Years_of_Functorial_Semantics.pdf

Lawvere, F. W. (2021) Toposes generated by codiscrete objects in combinatorial topology and functional analysis, Reprints in Theory and Applications of Categories 27: 1–11. http://www.tac.mta.ca/tac/reprints/articles/27/tr27.pdf

Lawvere, F. W. and Rosebrugh, R. (2003) Sets for Mathematics, Cambridge University Press. http://assets.cambridge.org/052180/4442/sample/0521804442ws.pdf

Lawvere, F. W. and Schanuel, S. H. (2009) Conceptual Mathematics, Cambridge University Press. http://assets.cambridge.org/97805218/94852/excerpt/9780521894852_excerpt.pdf

Nature Editorial (2016) Digital intuition, Nature 529: 437. https://www.nature.com/articles/529437a

Posina, V. R. (2020) Hard, harder, and the hardest problem: The society of cognitive selves, Tattva — Journal of Philosophy 12(1): 75–92. https://philarchive.org/archive/POSHHA-2

Posina, V. R., Ghista, D., and Roy, S. (2017) Functorial semantics for the advancement of the science of cognition, Mind & Matter 15(2): 161–184. https://doi.org/10.5281/zenodo.3924392

Posina, V. R. and Roy, S. (2022) Isbell conjugacy for developing cognitive science. https://doi.org/10.5281/zenodo.7496454

Posina, V. R. and Roy, S. (2023) Mind-matter problem solved! https://doi.org/10.5281/zenodo.7743205

Samuel, P. (1948) On universal mappings and free topological groups, Bulletin of the American Mathematical Society 54(6): 591–598. https://www.ams.org/journals/bull/1948-54-06/S0002-9904-1948-09052-8/S0002-9904-1948-09052-8.pdf

Sarewitz, D. (2017) Kill the myth of the miracle machine, Nature 547: 139. https://doi.org/10.1038/547139a

Stilgoe, J. (2023) We need a Weizenbaum test for AI, Science 381(6658). https://www.science.org/doi/full/10.1126/science.adk0176

Turing, A. M. (1950) Computing machinery and intelligence, Mind LIX(236): 433–460. https://academic.oup.com/mind/article/LIX/236/433/986238

Wigner, E. (1960) The unreasonable effectiveness of mathematics in the natural sciences, Communications on Pure and Applied Mathematics 13(1): 1–14. https://doi.org/10.1002/cpa.3160130102

--

Posina Venkata Rayudu

Qualitylessness, like meaningless symbols that propelled arithmetic to variable algebra, is indispensable for the graduation of geometry into variable geometry.