The academic assessment of generative artificial intelligence is intrinsically a matter of source criticism, deeply embedded in historical approaches. So is the technical functioning of generative AI and LLMs. AI as we have it today is fundamentally a history machine: it scrutinizes vast quantities of historical sources (or data) and interprets them statistically in order to output predictions with a high degree of certainty. Any success it enjoys, however, is due not to the immensity of the data it is able to process, but rather to the opposite. Its quantitative data sets can never hope to grasp the fullness of reality in all its profound mystery. But by delineating the field, by excising the qualitative, “AI” creates the very world it is able to predict. Our society is being radically reduced to a matrix of quantitative data points: the bare one dimension presciently described by Herbert Marcuse over 60 years ago.
(Prompt taken from this recent job posting)