I used ChatGPT for the first time a couple of days after it was released. I was going to a friend’s house for dinner, and I must have driven everybody crazy when I kept rerouting the conversation to “the chatbot,” a noun that I remember felt really weird then. Anyway, I don’t remember what I had done with it, but afterwards I asked it to write a few formal letters and e-mails for me, and things like that, but for a long time it was not clear what I could actually use it for. Somebody told me that they asked it to plan their meals. Somebody else talked about discussing their life with it. Well, I am probably too proud for that, and I guess that the thing didn’t inspire a lot of confidence in me, because when I tried to get it to do some math, it was invariably a flop. The math thing has not changed, although I know that there are people who use it profitably, even in areas I might care about. But when I try, it always seems to be useless. It might be that I don’t know how to ask, or that I ask questions that are too vague, or that I am just using the wrong version of the thing, or that it is better for some kinds of math than for others. Be that as it may, so far AI has been of no use to me in feeding my cats. Although, now that I think about it, I can’t be so sure. Namely, I am working on something about geodesics on hyperbolic surfaces, and AI might have been working for me. I don’t know. The point is that what we are doing has theorems and lemmas, but visualization is an integral part of it all, and my collaborator has been writing programs that do amazing things, and amazingly fast. He might be using AI for that. But when it comes to actually doing math myself, AI has been of no use.

By now, I have, however, spent a lot of time with AI, doing something that would have been impossible without it. Indeed, when I decided to get rid of my Google site, I needed help to get my own page running. Then I decided to leave Facebook and start this blog, and again I needed help to set it up. The same happened when I decided to have a site with the recipes I cook. While I was at it, I also decided to make a couple of apps for my personal use, and again I needed help. In a parallel universe, I could have just learned enough HTML, CSS, and JavaScript on my own, and I actually have, somewhere under a by-now decent layer of dust, a very thick book about all that. Instead, I have been “vibecoding,” a word I just learned a few days ago. As I see it, “vibecoding” amounts to telling some AI agent—because of vaguely nationalistic reasons, I use Mistral’s—what I want, trying whatever it suggests, and then asking again. The same process works for writing a script to do something, setting up a cloud database, creating an Android app, understanding how to run some version of Linux on my tablet—dealing with the file structure is a nightmare—or figuring out how to manage a server without being able to SSH into it.

Given the starting point, it all took a while, let me tell you. I guess that this is partly due to the fact that the thing sometimes does something silly, for example deleting part of the code while keeping a complete poker face, meaning that after every modification you have to test everything again. Now, the thing is really polite, and when you point out a mistake, it profusely apologizes and solves the problem, but you’ve got to check. But it would be nonsensical to blame the thing for the hours upon hours that I needed for any of the things I mentioned above. Evidently, I have been the weak link here. I mean, having never worked with a database, and having never needed a website with anything other than the most static of contents, I just had no clue how to organize things. If I were to start from scratch now, I would have much clearer ideas of what I wanted, and I would be able to ask far more efficiently. Altogether, the structure would make much more sense. But it took hours upon hours.

By now I know how to underline, boldface, or italicize text, and I know a few other HTML commands I am not going to demonstrate. I have also learned how to play a bit with CSS, but that is pretty easy. And, although I am unable to write a script myself, I can read those that the thing writes for me well enough to understand what is going on. Basically enough to locate where I think something goes wrong and to ask the thing to repair it. But I would be unable to write an HTML site from scratch. In some sense, that is not so surprising. At the end of the day, although I have been using it for at least 30 years, I would not know how to start a LaTeX file from scratch: I always copy an old file and just modify it as needed, which is why the macros I use are still pretty much German-based, to the amusement and annoyance of all my coauthors. Similarly, although over time I have written a few thousand lines of code, every time I have to write a Python program I feel that I have forgotten everything I ever knew. So, I have learned something, but I remain, and will remain, completely useless. For this and for much else. But for this, AI is priceless.

Now there are apparently things like Claude Code (and there seem to be about 200 alternatives) that would brutally speed up the whole process. I might try them, and I might again be amazed, but I am a bit skeptical. Not because of their capabilities, which I am sure are impressive, but because it has some value for me that I understand how Shoddy Vegetarian works. Value for me. Just for me. Probably the thing could do a more professional job if I left it alone and didn’t introduce noise and mistakes, but in the end, the whole thing would have less value for me. A bit like how a homemade cake sometimes has more value than one produced in an excellent patisserie.

Probably the same can be said about these posts I write. I am sure that if I were to tell AI that I think the world will have to get used to living with less energy than it has now, elaborate minimally on why I think so, ask it to do some research for me, and tell it to write a blog post about it, then I would end up with a pretty well-written text, with solid facts and explanations. It would probably be more impartial than what I write myself. It would surely be more professional. There would be fewer “anyways” and “nows.” But it would be incredibly more boring for me. And it would not be me. Me with my quirks, prejudices, obsessions, and so on. There is no way I would write this using AI. A high school assignment, yes, but not something I write because I want to write it.

Also, I am guessing that the 2003 readers I have—I know of 3, my mother and two others, but there must be thousands—read this because it is clear that it is me who is writing it. I am pretty sure that the thing would not refer to the being as the being. Other than that, it would not know that I have a scythe in my field for you to come cut grass if you want. Reading this is the equivalent of getting a homemade cheesecake: the top is cracked and too brown, and the dough is probably undercooked. But it is homemade. And, although some homemade cakes are truly atrocious, being homemade has some value. A friend said that she might feed my posts to AI to see what it produces. I would be a bit afraid, but mostly I would be curious. Afraid because I am sure that it would feel as if the thing were making fun of me. Although it would be my friend who was making fun of me. But although I am sure that the thing would catch on to my mannerisms, I am confident that the result would be a bit of a caricature.

At the end of the day, the thing would probably not think of including, in a post about three books by Deborah Levy, a boring and totally unrelated rant about the practicalities of Roman galleys. Still, I would be curious. But even if I knew that the thing would write exactly what I would write, I would not want to use it.

Now, when I make a cheesecake—one of the few sweet things I cook—I try to do a professional job. And I follow somebody else’s recipes, often mixing several of them. In the same way, when I write about politics or money or anything, I am basically being a parrot, repeating things I have heard. A brainwashed parrot, because I believe what I write, but a human parrot, not an AI parrot. A parrot that has fun writing this. Indeed, writing this has value for this particular parrot, incomparably more than it would have if the parrot were just asking AI to do it for him. And I believe that the quirks, the English mannerisms, the undercooked crust, the random choice of topics, that all of that has some value for the 3003 readers of this post. Did you notice how I just needed a few paragraphs to win over another 1000 readers?

I actually believe that something similar happens, or will happen, in math and much else. At the end of the day, a math paper is largely a story, and the real insight is often to figure out that some character is central. And to name it. Other than that, most math papers (all?) are based on a few simple observations out of which one gets a lot of mileage. The classification of quadratic forms over the reals, something one spends a long time discussing in a linear algebra class, is based on the idiotic observation that if the restriction of a bilinear form to a subspace V is non-degenerate, then the ambient space is the direct sum of V and its orthogonal complement. The proof of that lemma is shorter than the statement. And it is from there, and from the idea of a “quadratic form,” that it all comes. Besides that lemma, the classification of quadratic forms is a story. Without wanting to compare them with such a central piece of math, it is the same with my papers. Whenever I think about something, I am always thinking of a talk I will maybe give about it. When writing a paper, I always start with the introduction. The talk changes as the to-be-proved results change, but I am always thinking of that talk. We tried to prove that something was a spine of SLnℝ/SOn—whatever that is—and ended up writing a paper called The spine which was no spine. That paper, like basically all of them, is a story based on an observation and a lemma. Maybe at some point AI can help prove the lemma. Or can just prove the lemma, full stop. And maybe it can write the paper. But I doubt that AI will come up with the story that is to be told. Or end the introduction of a paper by expressing both authors’ gratitude to their homeland for the beauty of its villages, where their respective mothers found themselves at the moment of writing. AI would also not replace the aseptic Q.E.D.
at the end of a proof with the expression “and Bob’s your uncle,” which has pretty much the same meaning but much more of an earthy British feel to it. More seriously, how would AI come up with the ideas and the names of pleated surfaces, train tracks, or foldings, not to speak of the neutered space? The story is what makes most math papers worth reading, and the mannerisms are often the difference between drab pages and something you can actually read. As long as math is a human activity, something that humans do because humans find it interesting, because it gives them pleasure and fulfillment, my job will be safe.
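
For the curious, that idiotic observation really does fit in a couple of lines. Here is a sketch, in notation of my own choosing ($E$, $B$, $V$ are my symbols, not a quotation from any textbook), written for the symmetric case, which is the one relevant to quadratic forms:

```latex
% Sketch in my own notation; stated for symmetric forms on a
% finite-dimensional real vector space.
\begin{lemma}
Let $B$ be a symmetric bilinear form on a finite-dimensional real vector
space $E$, and let $V \subseteq E$ be a subspace such that the restriction
$B|_V$ is non-degenerate. Then $E = V \oplus V^{\perp}$, where
$V^{\perp} = \{\, x \in E : B(x,v) = 0 \text{ for all } v \in V \,\}$.
\end{lemma}

\begin{proof}
Since $B|_V$ is non-degenerate, for every $x \in E$ there is a unique
$v_x \in V$ with $B(x,w) = B(v_x,w)$ for all $w \in V$. Then
$x = v_x + (x - v_x)$ with $x - v_x \in V^{\perp}$. Finally,
$V \cap V^{\perp} = 0$, again by the non-degeneracy of $B|_V$.
\end{proof}
```

From here, the classification is indeed a story: one splits off non-degenerate lines one at a time and counts signs, and Sylvester’s law of inertia does the rest.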