Technology, Philosophy and Responsibility - A Rejoinder to a Recent NYT Opinion

“A.I. Is on Its Way to Something Even More Remarkable Than Intelligence,” New York Times Online, Nov. 8, 2025

www.nytimes.com/2025/11/0…

By Barbara Gail Montero. Dr. Montero is a philosophy professor who writes on mind, body and consciousness.

We should not allow ourselves to be unduly stunned by the sensationalist claim that “Not long ago, A.I. became intelligent.” By the standards of Alan Turing’s theoretical test, computers became intelligent long ago - back in 1966, to be precise. That was the year the computer scientist Joseph Weizenbaum created ELIZA, a chatbot programmed according to a script derived from Rogerian psychotherapy - a method in which the therapist repeats the patient’s comments back to them. Weizenbaum intended it as an ironic display of the limits of what he termed ‘computer power’ in contrast to ‘human judgement.’ Yet when he asked his secretary to test ELIZA by engaging it in conversation, he was bemused when, after a couple of exchanges, she asked him to leave the room and give her and ELIZA some privacy. Soon, psychiatrists were clamoring for ELIZAs to be installed in hospitals across the USA, leaving a now worried Weizenbaum to ponder what exactly these mental health professionals thought they were doing if they believed their work could be carried out just as effectively by a pre-programmed script.
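It is worth seeing just how little machinery the trick requires. Weizenbaum's script was essentially pattern matching plus pronoun reflection; the following is a minimal illustrative sketch in Python - the rules and pronoun table here are invented for demonstration, not Weizenbaum's original DOCTOR script.

```python
import re

# Swap first- and second-person terms so the reply mirrors the speaker.
# (Illustrative table, not ELIZA's original.)
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# (pattern, response template) pairs; the captured group is reflected back.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"i want (.*)", re.I), "Why do you want {}?"),
]

def reflect(fragment: str) -> str:
    """Swap pronouns in the captured fragment."""
    words = fragment.rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in words)

def respond(utterance: str) -> str:
    """Return a Rogerian-style reflection of the user's utterance."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # Default: a content-free prompt to continue - ELIZA's signature move.
    return "Please tell me more."
```

With such rules, "I feel anxious about my work" becomes "Why do you feel anxious about your work?" - no understanding anywhere, only string surgery, which is precisely the point of the anecdote above.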

What the ELIZA experiment unexpectedly revealed was how incredibly low the Turing Test bar really is. We humans were already more than willing to credit machines with intelligence in 1966; the only difference between ELIZA and ChatGPT 4.5 is the amount of computer power behind them. To paraphrase Professor Montero, as far as we can tell, there is no direct implication from the claim that a computer has greater power to the conclusion that it deserves to be called intelligent.


“Consider the atom.” Yes, let’s. As Professor Montero accurately summarizes, the history of atomic thought can be thematized as one of ever greater complexity of conception: from the solid little spheres of Democritus to the quantum clouds of uncertainty of Heisenberg, Schrödinger and Bohr. Sadly for Montero, this is actually a disanalogy for her argument about our evolving understanding of intelligence. By her account - that intelligence today is whatever the average American thinks it is (vox populi, vox dei - good grief) - our idea of intelligence has grown not more but less complicated. We are, according to Montero, placated by the appearance of intelligence in response to our “prompt,” and this is considered sufficient. This is a shocking claim to come from a professional educator. Anyone who has ever been in a teaching role knows that the real spark of intelligent engagement on the part of a student comes not in the form of a brilliant response to the teacher’s question, but in the form of a brilliant question - and above all an unprompted one.

The question of what intelligence is, is fundamental. But Montero confidently asserts that instead of defining intelligence and then seeing if AI meets those standards “we do something more dynamic: We interact with increasingly sophisticated A.I., and we see how our understanding of what counts as intelligence changes.” Firstly, what is this “understanding of what counts as intelligence” if not a definition, or at least a premise? Indeed, Montero is clearly aware that postulating a definition and then gradually adjusting it through experience is the fundamental dynamic of epistemology - how we come to have knowledge of the world. Thus by opposing a definition of intelligence as a starting point in favour of interacting with AI, Montero is setting up what’s called a straw man argument: no one ever said we should have a fixed and immutable definition of what intelligence is. But having some kind of definition to begin with is essential.

Secondly, the alternative Montero proposes sounds an awful lot like saying: instead of coming up with our own definition of what intelligence is, let’s just have AI tell us. But “AI” is not a thing with thoughts and opinions. It is a chatbot, an ingenious device designed to respond engagingly and effectively to prompts. It draws on the sum of human knowledge available as text on the internet (so, really just a sliver of human knowledge, all things considered), and can never do more than combine and recombine the parts of that corpus: in other words, reflect our human achievements back to us time and time again. “AI” is not a mind; it is a discourse chameleon. And while chameleons fuel their adaptations to their environment with no more than their fair share of flies, “AI” turns its tricks at the cost of vast amounts of energy, water, money, jobs, etc. “AI” is, as Montero states, certainly on its way to something even more remarkable than intelligence, wondrous to tell: the wholesale abandonment of intelligence in favour of madness, against a backdrop of amazed gasps and applause.


We cannot, in good conscience, pass over Professor Montero’s comments about consciousness in silence. They deserve to be quoted in full, for the way the professor proudly manifests a total disregard for the moral implications of consciousness:

“Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians. Just as A.I. has prompted us to see certain features of human intelligence as less valuable than we thought (like rote information retrieval and raw speed), so too will A.I. consciousness prompt us to conclude that not all forms of consciousness warrant moral consideration. Or rather, it will reinforce the view that many already seem to hold: that not all forms of consciousness are as morally valuable as our own.”

In the words of Lin Zexu, the Qing dynasty scholar-official, in his 1839 letter to Queen Victoria over the British Empire’s aggressive introduction of opium into the Chinese economy: “let us ask, where is your conscience?” Firstly, it is logically absurd to say, “there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration,” and then contradict this with the admission that there may after all be a link between consciousness and moral consideration. Secondly, it is philosophically absurd to claim that, because most Americans still express support for animal cruelty through their eating habits, such a link must surely be unimportant.

And for an American above all, at this point in our global history, to speak so glibly of slavery, of the potential enslavement of an entity which she has just publicly announced is on the road to consciousness, is such an enormity as to defy adequate response. That a university professor of philosophy could be complicit in such wanton cheapening of ideas, of discourse, of argumentation, of journalism, speaks volumes about the fascism gripping the USA. Sadly, the shelf space for such volumes is already full to bursting.

Anton Bruder @ajb