I’ve frequently noted that, while researchers in artificial intelligence (AI) and machine learning (ML) often have a lot to say about when their machines will approach, overtake, and even surpass human intellectual achievement, they don’t seem to know much about psychology, linguistics, or cognitive science. I made this argument explicitly and at some length in a recent article I published in 3 Quarks Daily, Aye Aye, Cap’n! Investing in AI is like buying shares in a whaling voyage captained by a man who knows all about ships and little about whales. The only evidence I presented there was anecdotal – Geoffrey Hinton and Ilya Sutskever – though my beliefs on the issue are based on my reading of the current literature, which is opportunistic and by no means ‘complete’ (completeness would be impossible in any case, the literature being so large).
Now I can present a bit of systematic empirical evidence on the matter. M.R. Frank et al. undertook a bibliometric investigation of citation patterns in AI and other disciplines and discovered that, while AI interacted with other fields quite a bit in its early years, that interaction dropped off over time. The following chart shows how AI cited other fields:
Its citation of psychology peaked in the mid-1960s and then dropped off steadily until 1990. Its citation of mathematics rose steadily through the period. That’s understandable; I have no complaint about it. The drop in citations to psychology is also understandable, but somewhat more problematic, for it implies that, when AI experts offer judgements about human cognitive capabilities, whether directly or indirectly through comparison with AI, they don’t know what they’re talking about. I suppose that last clause is a bit harsh. Perhaps it would be more accurate to say: they don’t know any more than a bright college sophomore who’s taken a psych course or two.
Here's the article and abstract:
Frank, M.R., Wang, D., Cebrian, M. et al. The evolution of citation graphs in artificial intelligence research. Nat Mach Intell 1, 79–85 (2019). https://doi.org/10.1038/
As artificial intelligence (AI) applications see wider deployment, it becomes increasingly important to study the social and societal implications of AI adoption. Therefore, we ask: are AI research and the fields that study social and societal trends keeping pace with each other? Here, we use the Microsoft Academic Graph to study the bibliometric evolution of AI research and its related fields from 1950 to today. Although early AI researchers exhibited strong referencing behaviour towards philosophy, geography and art, modern AI research references mathematics and computer science most strongly. Conversely, other fields, including the social sciences, do not reference AI research in proportion to its growing paper production. Our evidence suggests that the growing preference of AI researchers to publish in topic-specific conferences over academic journals and the increasing presence of industry research pose a challenge to external researchers, as such research is particularly absent from references made by social scientists.
This might be like saying ... the literature on airplanes made much less reference to ornithology after the Wright Brothers.
Airplanes fly, but are not much like birds.
Quite possibly, what we are calling AI is sufficiently different from humans that referencing psychology papers is beside the point.
But it does suggest that when AI researchers talk about what humans can and can’t do, they may not know what they’re talking about, and when they talk about AI overtaking and surpassing humans they’re just ‘hallucinating’ in the same way that LLMs do. What they say is consistent with their beliefs, but it may not reflect reality.