Pages in this blog

Friday, September 30, 2022

A guide to AI hype


Thursday, September 29, 2022

Jacob Collier does the Bee Gees [How Deep is Your Love]

Pay attention to audience interaction at 7:07 to the end.

GPT-3 based news summaries


Tuesday, September 27, 2022

Why AI won’t revolutionize drug development

Jeffrey Funk and Gary N. Smith, No, AI probably won’t revolutionize drug development, Salon, September 24, 2022.

The opening paragraphs:

Drug development is expensive, time-consuming, and risky. A typical new drug costs billions of dollars to develop and requires more than ten years of work — yet only about 0.02% of the drugs in development ever make it to market.

Some claim that AI, or artificial intelligence, will revolutionize drug development by ushering in much shorter development times and drastically lower costs. Many scientists and business consultants are especially optimistic about AI's ability to predict the shapes of nearly every known protein using DeepMind's AlphaFold, an artificial intelligence tool developed by Google parent company Alphabet; predicting this information with great detail would be key to quickly developing drugs. As one AI company boasts, "We … firmly believe that AI has the potential to transform the drug discovery process to achieve time and cost efficiencies."

Later:

The relative unimportance of AI in COVID development is consistent with the conclusion of many scientists that AI is not about to revolutionize drug development. The biggest problem is that clinical trials are the longest and typically most expensive part of the process, and AI cannot replace actual trials. Even AI's impact on drug discovery may be limited. A Science op-ed recently argued that, "[AI] doesn't make as much difference to drug discovery as many stories and press releases have had it…. Protein structure prediction is a hard problem, but even harder ones remain."

An often crucial part of drug development is to determine whether a drug binds to the candidate protein, something that MIT researchers have shown that AlphaFold cannot do and DeepMind admits AlphaFold can't do: "Predicting drug binding is probably one of the most difficult tasks in biology: these are many-atom interactions between complex molecules with many potential conformations, and the aim of docking is to pinpoint just one of them." [...]

One huge hurdle for all AI data mining algorithms is that the data deluge has made the number of promising-but-coincidental patterns waiting to be discovered far, far larger than the number of useful relationships — which means that the probability that a discovered pattern is truly useful is very close to zero.
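A toy simulation (my own illustration, not from the article) makes that base-rate point concrete: screen enough pure-noise features against a random outcome and “promising” correlations appear by chance alone. The sample sizes and threshold below are arbitrary choices for the sketch.

```python
import random

random.seed(0)

n_samples = 100      # e.g., patients in a dataset
n_features = 2000    # candidate predictors, all pure noise

def corr(x, y):
    """Pearson correlation coefficient, stdlib-only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A random outcome: by construction, nothing truly predicts it.
outcome = [random.gauss(0, 1) for _ in range(n_samples)]

promising = 0
for _ in range(n_features):
    feature = [random.gauss(0, 1) for _ in range(n_samples)]
    if abs(corr(feature, outcome)) > 0.2:  # looks "promising"
        promising += 1

print(promising, "coincidental 'discoveries' out of", n_features)
```

With these numbers, dozens of features clear the threshold even though every relationship is coincidental; each one would fail in a follow-up trial, which is exactly why the probability that a mined pattern is truly useful is close to zero.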

There is more at the link.

Sunday, September 25, 2022

Let’s retire at 50 while there's still time to do something interesting with our lives

The NYTimes has another article that feeds my interest in changing attitudes toward work: Lisa Rabasa Roepe, Millennials Want to Retire at 50. How to Afford It Is Another Matter, Sept. 24, 2022. It opens:

Although Devangi Patel, 33, has been working as a cardiothoracic anesthesiologist at a large medical center outside Atlanta for only two years, her goal is to afford to walk away from her job at 50.

“That, to me, is the American dream,” she said.

Dr. Patel is not alone in her quest to become financially independent — and at a relatively early age. It seems that a generational shift is well underway: Many millennial workers don’t aspire to retire in their mid- or late 60s, like their parents. Instead, many with professional careers are seeking to leave their jobs by 50 and work for themselves or take a lower-paying role that is more aligned with their interests, studies are showing and financial advisers are finding.

“I want to get to a point where I don’t have to work for money anymore, and I can work for pleasure,” Dr. Patel said.

But reaching that goal has been harder than Dr. Patel anticipated. Although she contributes to a 401(k) and a Roth individual retirement account, invests in stocks with a brokerage account and maxes out her health savings account, she is also paying off a $250,000 loan for medical school and paying for her wedding in December.

There’s more at the link.

The Tech Trinity

Saturday, September 24, 2022

AI Doom cult at 3QD + TEDx talk and essay contest

Two weeks ago I published another essay at 3 Quarks Daily:

It’s a decent essay, in particular I like the capsule history I sketch for the emergence of AI Doom as an issue. But it’s hardly my last word on the subject.

* * * * *

Here’s a TEDx talk from 2017 by a high school student, Andrew Zeitler, entitled, “The Truth Behind Artificial Intelligence.”

It opens with two quick versions of the future, one in which AI is the center of a world of luxury and enjoyment and the other in which AI destroys the human race. At about 14:08 he says the technological singularity will arrive in 2029. He concludes by quoting Edsger Dijkstra: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

He's bright and engaged but, as far as I can tell, does not have deep technical knowledge of AI. He presents these ideas as obvious truths. I have no idea how many people share his beliefs. But relatively few people are in a position to think these matters through with any degree of sophistication.

I have no idea when these issues will be resolved. But I suspect that, at this level, they are issues being driven by symbolic concerns, not technical ones. We’re working through a cultural mythology for the 21st century.

* * * * *

This is epistemic theater, but also PR (signaling). For those purposes the contents of the essays are irrelevant. How much more such theater is in store for us?

Saturday, September 10, 2022

The Threat of AI [jobs, not rogue AI]

Bump: from June 2017. This sort of thing is on my mind these days.
* * * * *
 
Kai-Fu Lee has an important op-ed in the NYTimes, "The Real Threat of Artificial Intelligence". He begins by pointing out that all too many discussions of the problems posed by AI turn on the so-called "singularity", when AI will surpass human intelligence. He points out, quite rightly IMO, that however interesting such questions are "they are not pressing". Our best AI tools have little or no understanding of anything, but they nonetheless can do useful tasks and are improving rapidly. These tools will take existing human jobs without replacing them with new jobs.

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

The rest of the op-ed addresses these questions and is well worth reading.

He calls for high tax rates with the government subsidizing "most people's lives and work". The USA and China may well be able to do this. Most countries will not.

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.

Yikes!

Addendum, 6.26.17: Mark Liberman has posted about this over at Language Log, and has some interesting links to remarks by Norbert Wiener as well as an interesting set of replies.

Duke Ellington wrote a suite for Queen Elizabeth II

0:00 Sunset and the Mocking Bird
3:51 Lightning Bugs and Frogs
6:45 Le Sucrier Velours
9:33 Northern Lights
13:11 The Single Petal of a Rose
17:19 Apes and a Peacock

Ted Gioia tells the story:

In a historic Duke-meets-Queen moment [in 1958], Ellington served up his famous charm for the monarch. When she asked him whether this was his first visit to Britain, Duke replied that his initial trip to London was in 1933, “way before you were born.” This was out-and-out flattery, because Queen Elizabeth had been born in 1926—but she played along with the game. “She gave me a real American look,” he later recalled, “very cool man, which I thought was too much.”

Give Duke credit for savviness. He understood that even a queen wants to hear how young she looks. Ellington followed up saying that Her Majesty “was so inspiring that something musical would come out of it.” She told him that she would be listening.

According to Ellington’s son Mercer, his father began working on the music to The Queen’s Suite as soon as he got back to his hotel room. He enlisted colleague and collaborator Billy Strayhorn. In addition to royal inspiration, the work also borrowed from the natural world: the opening movement draws on birdsong heard during a Florida visit, another section was a response to an unexpected encounter with “a million lightning bugs” serenaded by a frog. The best known part of the Suite, “The Single Petal of a Rose,” was spurred by a floral display on a piano at a friend’s home. [...]

By early 1959, the finished work was ready for performance. The Queen’s Suite was now a 20-minute work in six movements. The band recorded it over the course of three sessions in February and April 1959. A single golden disc was made, and sent to Buckingham Palace. In order to ensure that no other copies were released, Ellington reimbursed Columbia, his label, some $2,500 in production costs, and thus retained personal ownership of the master tapes.

There's more at the link.

Thursday, September 8, 2022

Prediction as a way of foreclosing the future – And no foreclosure is more absolute than apocalypse [AGI]

I’ve just come across a recent article that sheds light on one of the signal features of the contemporary AI culture, in particular, the subculture devoted to the study and prevention of existential risk from advanced artificial intelligence. The article:

Sun-Ha Hong, Predictions Without Futures, History and Theory, Vol 61, No. 3: July 2022, 1-20. https://onlinelibrary.wiley.com/doi/epdf/10.1111/hith.12269

Abstract: Modernity held sacred the aspirational formula of the open future: a promise of human determination that doubles as an injunction to control. Today, the banner of this plannable future is borne by technology. Allegedly impersonal, neutral, and exempt from disillusionment with ideology, belief in technological change saturates the present horizon of historical futures. Yet I argue that this is exactly how today’s technofutures enact a hegemony of closure and sameness. In particular, the growing emphasis on prediction as AI’s skeleton key to all social problems constitutes what religious studies calls cosmograms: universalizing models that govern how facts and values relate to each other, providing a common and normative point of reference. In a predictive paradigm, social problems are made conceivable only as objects of calculative control—control that can never be fulfilled but that persists as an eternally deferred and recycled horizon. I show how this technofuture is maintained not so much by producing literally accurate predictions of future events but through ritualized demonstrations of predictive time.

From page 4:

Notably, today’s society feverishly anticipates an AI “breakthrough,” a moment when the innate force of technological progress transforms society irreversibly. Its proponents insist that the singularity is, as per the name, the only possible future (despite its repeated promise and deferral since the 1960s—that is, for almost the entire history of AI as a research problem). Such pronouncements generate legitimacy through a sense of inevitability that the early liberals sought in “laws of nature” and that the ancien régime sought in the divine. AI as a historical future promises “a disconnection” from past and present, and it cites that departure as the source of the possibility that even the most intractable political problems can be solved not by carefully unpacking them but by eliminating all of their priors. Thus, virtual reality solves the problems with reality merely by being virtual, cryptocurrency solves every known problem with currency by not being currency, and transhumanism solves the problem of people by transcending humanity. Meanwhile, the present and its teething problems are somewhat diluted of reality: there is less need to worry so much about concrete, existing patterns of inequality or inefficiency, the idea goes, since technological breakthroughs will soon render them irrelevant. Such technofutures saturate the space of the possible with the absence of a coherent vision for society.

That paragraph can be taken as a trenchant reading of long-termism, which is obsessed with prediction and, by focusing attention on the needs of people in the future, tends to empty the present of all substance.

Nor does it matter that the predicted technofuture is, at best, highly problematic (p. 7):

In short, the more unfulfilled these technofutures go, the more pervasive and entrenched they become. When Elon Musk claims that his Neuralink AI can eliminate the need for verbal communication in five to ten years [...] the statement should not be taken as a meaningful claim about concrete future outcomes. Rather, it is a dutifully traditional performance that, knowingly or not, reenacts the participatory rituals of (quasi) belief and attachment that have been central to the very history of artificial intelligence. After all, Marvin Minsky, AI’s original marketer, had loudly proclaimed the arrival of truly intelligent machines by the 1970s. The significance of these predictions does not depend on their accurate fulfillment, because their function is not to foretell future events but to borrow legitimacy and plausibility from the future in order to license anticipatory actions in the present.

Here, “the future . . . functions as an ‘epistemic black market.’” The conceit of the open future furnishes a space of relative looseness in what kinds of claims are considered plausible, a space where unproven and speculative statements can be couched in the language of simulations, innovation, and revolutionary duty. What is being traded here are not concrete achievements or end states but the performative power of the promise itself. In this context, claims do not live or die by specifically prophesized outcomes; rather, they involve a rotating array of promissory themes that create space for optimism and investment. Cars that can really drive themselves without alarming swerves, facial recognition systems that can really determine one’s sexuality, and so on—the final justifications for such totally predictive systems are always placed in the “near” future, partly shielded from conventional tests of viability or even morality.

The prediction of an AI apocalypse is not, however, an optimistic one. But it does affirm the power and potency of the technology. And the notion of the future as an ‘epistemic black market’ points up the idea that the predictive activity is a form of theater, epistemic theater.

Cosmograms:

It seems fitting, then, to think through technofutures via a concept taken from religious studies by a historian of science. John Tresch describes cosmograms as unified pictures of the world—“central points of reference that enable people to bring themselves into agreement.” This is not to say that a cosmogram boasts an explicit theory of everything, which might then be proven or disproven like a formula. Nor do such “unified pictures” enact a total and dogmatic belief upon their subjects. Technofutures as cosmograms do not insist on a particular technology or technological outcome per se. Rather, predicting the emergence of intelligent robots by a certain year is a way to build and replenish a familiar array of beliefs about human transcendence and technological solutionism while accommodating a wide variety of definitions of intelligence, robots, and progress.

Here I would emphasize the phrase “enable people to bring themselves into agreement.” AI is regarded as transformative technology. Reaching agreement on just how AI should best transform society would be difficult. But if one believes that AI is likely to pose an existential threat, that presents a much narrower target for which agreement must be reached. All can agree that they do not want humankind to be extinguished.

There’s more in the article.

Obama’s Eulogy for Clementa Pinckney 1: The Circle of Grace

Another bump to the top. I wrote several posts on Obama's Eulogy for Clementa Pinckney and gathered them into a working paper, Obama’s Eulogy for Clementa Pinckney: Technics of Power and Grace, along with some other posts on the Black church. This is the first of those posts. I argue that the eulogy exhibits ring-form composition. I conclude with the complete text of the eulogy.
* * * * *
 
Make no mistake, it was a remarkable performance. Nominally a eulogy, very much a eulogy. But also a sermon on the past and future of race relations in America.

Though Rev. Pinckney’s funeral was held on June 26, and I heard of Obama’s eulogy shortly thereafter, and heard about it again, and again, I didn’t bother to watch it until a couple of days ago when I was reflecting on some remarks the economist Glenn Loury and linguist John McWhorter made about the ‘authenticity’ of Obama’s performance. After all, Obama wasn’t raised in the church. And yet he chose to don the vestments of a black preacher, the rhetorical and oratorical style, to deliver his eulogy.

I’ll get around to Loury and McWhorter in a later post. In this one I want to look at the eulogy itself, which pretty much took the form of a sermon addressed to the nation. In my preliminary analysis that sermon has five basic parts as follows:
1. Prologue: Address to his audience, quoting of a passage from the Bible.

2. Phase 1: Moves from the Clementa Pinckney’s life to the significance of the black church in history.

3. Phase 2: The murder itself and presence of God’s grace.

4. Phase 3: Looks to the nation, the role racism has played, and the need to move beyond it.

5. Closing: Amazing Grace.

In the course of this analysis I will be referring to specific paragraphs by number. I have appended the entire text to this post and have numbered the paragraphs. Furthermore, I have uploaded an analytical table I am using as I think about the text. Each paragraph appears in the table along with comments here and there. You may view or download this document here:

* * * * *

I want to begin by quoting from an article Michiko Kakutani published on July 4, Obama’s Eulogy, Which Found Its Place in History:
A draft of the Charleston eulogy was given to the president around 5 p.m. on June 25 and, according to Mr. Keenan, Mr. Obama spent some five hours revising it that evening, not merely jotting notes in the margins, but whipping out the yellow legal pads he likes to write on — only the second time he’s done so for a speech in the last two years. He would rewrite large swaths of the text.

Mr. Obama expanded on a short riff in the draft about the idea of grace, and made it the central theme of the eulogy: the grace family members of the shooting victims embodied in the forgiveness they expressed toward the killer; the grace the city of Charleston and the state of South Carolina manifested in coming together in the wake of the massacre; the grace God bestowed in transforming a tragedy into an occasion for renewal, sorrow into hope.

First, I would love to be able to compare Keenan’s draft with the eulogy that Obama delivered. Did that draft open with “Giving all praise and honor to God”? Did it quote Scripture at the beginning (Hebrews 11:13), as is typical of sermons? That is to say, did Keenan know he was drafting a sermon, or did that happen as Obama devoted five hours and who knows how many pieces of 8.5 by 14 lined yellow paper to the rewrite?
 
Kakutani is quite right to single out grace as the sermon’s theme. But Obama did more than use the word a lot – 35 times by my count. The word didn’t appear until about 1300 words into this 3000-word sermon, that is, a bit before the mid-point. Roughly speaking, then, we can divide the sermon into two movements, 1) before “grace” and 2) after “grace”. Once we’ve done that the eulogy’s overall logic begins to reveal itself.
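Counts like these are easy to mechanize. Here is a minimal sketch of the sort of tally I have in mind; the sample string is my own toy text, not the eulogy itself:

```python
import re

def word_profile(text, word):
    """Count whole-word occurrences of `word` (case-insensitive)
    and report how many words precede its first appearance."""
    tokens = re.findall(r"[a-z']+", text.lower())
    count = tokens.count(word.lower())
    first = tokens.index(word.lower()) if count else None
    return count, first

# Toy example, not the eulogy text:
sample = "Amazing grace, how sweet the sound. That grace appeared."
print(word_profile(sample, "grace"))  # (2, 1)
```

Run against the full transcript, the same two numbers give you the total count and the position of the first occurrence, which is how one locates the “before grace / after grace” dividing line.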

Let’s set the prelude and closing aside for a moment. Each is short and both are formulaic. The bulk of the sermon is in between, roughly 2700 words.

Let’s skip to the middle, the part that I’ve called the transition. It starts with paragraph 21:
We do not know whether the killer of Reverend Pinckney and eight others knew all of this history. But he surely sensed the meaning of his violent act. It was an act that drew on a long history of bombs and arson and shots fired at churches, not random, but as a means of control, a way to terrorize and oppress. (Applause.) An act that he imagined would incite fear and recrimination; violence and suspicion. An act that he presumed would deepen divisions that trace back to our nation's original sin.

This is the first time that Dylann Storm Roof is mentioned, but not by name, never by name. Obama establishes that this action was not a personal one. It was not aimed at individuals as individuals. In the killer’s mind it was a symbolic act. He was attacking a group of people and an institution. The paragraph ends by referencing “our nation's original sin”, by which, I assume, Obama meant slavery, but/and of course the phrase “original sin” has deep Biblical resonance.

In paragraph 22 Obama says, “God works in mysterious ways”. Utterly standard, a cliché. But in this context, in this eulogy, we ARE immersed deep in mystery, that of the violent willful senseless deaths of nine people. The next paragraph, 23, has the first use of “grace” and references the “alleged killer” twice.

We are now poised at the mid-point of our trajectory, our oratorical and ritual journey.

Let us skip to paragraph 28, which begins: “As a nation, out of this terrible tragedy, God has visited grace upon us, for he has allowed us to see where we've been blind.” At this point we’ve moved through grace and are now prepared to contemplate that original sin – “allowed us to see where we've been blind” – and what we must set out to do.

What’s astonishing is how Obama got us there. Let us look at the opening of paragraph 23: “He [the alleged killer] didn't know he was being used by God. (Applause.)” Repeat: “He didn't know he was being used by God.”

Didn’t know he was being used by God.

What does that say; what does it imply? Was Dylann Roof’s murderous act willed by God? That’s what Obama’s words can be taken to imply, but it’s not quite what Obama is actually saying. His formulation – “being used by” – distances him and us from that implication – and remember, this is being delivered and comprehended in real time, during which there is no time for (nor inclination toward) reflective second-guessing. Roof does something and God takes it up and uses it.

And now we must take it up and use it. That’s what this sermon is about, taking it up and using it.

God works in mysterious ways.

We are deep in the thicket of what I’ve elsewhere termed “myth-logic.” Myth logic is rigorous in its own way, but those ways are not those of science. They are the ways of the heart.

Obama’s job is to say something that will be uplifting, that will allow people to leave the service feeling good about themselves and the world they live in, feeling ready to go out and change things. And he has to do that in the middle of terrible tragedy, when people are grieving for their lost husbands, wives, sons and daughters, relatives and friends, and for their nation. How do you create joy from grief? That’s a job for myth logic, not for rational deduction.

* * * * *

There’s much more to be said about this text, and perhaps I’ll devote another post to analysis. But I’ll close with some remarks about the prelude and the closing.

In the third paragraph Obama quotes Scripture:
"They were still living by faith when they died," Scripture tells us. "They did not receive the things promised; they only saw them and welcomed them from a distance, admitting that they were foreigners and strangers on Earth."

That sets up a template for the rest of the sermon. We are those who “did not receive the things promised” and yet we are to live by faith until we die, as those nine others did. And it sets up the theme of not being able to see (God’s mysterious ways) that is central to “Amazing Grace”. Divine grace itself is a mystery, coming as it does through no act or will of our own. That word “grace” pervades the second half of the eulogy, which ends in that hymn.

Thus Obama, in choosing his Scripture at the beginning, was also setting us up for the end. It is the song, which everyone sang along with him, that provided the salving grace people sought in that funeral.

* * * * *

Those who have been following New Savanna for a while know that I have a particular interest in a kind of textual symmetry known as ring-composition or ring-form. Such texts unfold as follows:

A, B, C … X … C’, B’, A’

A’ echoes A, B’ echoes B and so forth, with X being structurally central. Is Obama’s eulogy/sermon a ring-form text? Perhaps it is, and perhaps I’ll make an explicit argument in a later post. For now I note only that its structural center is that section where Obama talks about the murder, the only place he does so, and where he first invokes God’s grace.

Some Thoughts on the Discipline [Literary Criticism]

I'm bumping this to the top of the queue on general principle – and – to remind myself that, to the extent that I have a home discipline, it is the study of literature. I originally published this in October of 2015. I've written considerably more about literary study since then.
* * * * *
 
It appears that I’ll be publishing a working paper on the profession of academic literary criticism sometime in the next week or three, depending on what other projects I’m working on. This is a topic I’ve written quite a bit on, so much that it seemed to me that there should be a section in that working paper that serves as a guide to much, if not quite all, of that work.

This is a trial run on that section. As such it is a lightly annotated bibliography, where the annotations mostly consist of the abstracts I prepared for each working paper. I’ve divided them into four sections: Bridges, Description, Psychology, and Computational Criticism.

Bridges


How you get from here to there. The first one is how you get (how I got) from Lévi-Strauss’s practical work on myth to cognitive science. The second one seeks to justify the ways of literature and of humanists to Steven Pinker. The third places my conception of (the potentialities of) literary criticism in the context of Bruno Latour’s actor networks and his modes of being.

* *

Beyond Lévi-Strauss on Myth: Objectification, Computation, and Cognition (2015) 30 pp.

Abstract: This is a series of informal notes on the structuralist method Lévi-Strauss used in Mythologiques. What’s essential to the method is to treat narratives in comparison with one another rather than in isolation. By analyzing and describing ensembles of narratives, Lévi-Strauss was able to indicate mental “deep structures.” In this comparing Lévi-Strauss was able to see more than he could explain. I extend the method to Robert Greene’s Pandosto, Shakespeare’s The Winter’s Tale, and Brontë’s Wuthering Heights. I discuss how Lévi-Strauss was looking for a way to objectify mental structures, but failed; and I suggest that the notion of computation will be central to any effort that goes beyond what Lévi-Strauss did. I conclude by showing how work on Coleridge’s “Kubla Khan” finally led me beyond the limitations of structuralism and into cognitive science.

* *

An Open Letter to Steven Pinker: The Importance of Stories and the Nature of Literary Criticism (2015) 20 pp.

Abstract: People in oral cultures tell stories as a source of mutual knowledge in the game theory sense (think: “The Emperor Has No Clothes”) on matters they cannot talk about either because they resist explicit expository formulation or because they are embarrassing and anxiety provoking. The communal story is thus a source of shared value and mutual affirmation. And the academic profession of literary criticism came to see itself as a repository of that shared value. Accordingly, in the middle of the 20th century it turned toward interpretation as its central activity. But critics could not agree on interpretations and that precipitated a crisis that led to Theory. The crisis has quieted down, but is not resolved.

* *

Literary Criticism 21: Academic Literary Study in a Pluralist World, Revised September 2014 (2014) 42 pp.

Abstract: At the most abstract philosophical level the cosmos is best conceptualized as containing various Realms of Being interacting with one another. Each Realm contains a broad class of objects sharing the same general body of processes and laws. In such a conception the human world consists of many different Realms of Being, with more emerging as human cultures become more sophisticated and internally differentiated. Common Sense knowledge forms one Realm while Literary experience is another. Being immersed in a literary work is not at all the same as going about one's daily life. Formal Literary Criticism is yet another Realm, distinct from both Common Sense and Literary Experience. Literary Criticism is in the process of differentiating into two different Realms, that of Ethical Criticism, concerned with matters of value, and that of Naturalist Criticism, concerned with the objective study of psychological, social, and historical processes.

Description


The description of literary form is the foundation of a revivified literary criticism. But description as I’ve come to understand it, both from doing and from theorizing about it, is more subtle and difficult than traditional criticism warrants. It is also more important, far more important. Without four centuries of painstaking naturalistic description available to him, Darwin would have had no empirical basis for his work. That’s where literary criticism is now: Lacking careful and detailed descriptions of the texts we’re entrusted with, we lack the basis for objective accounts of textual phenomena. Producing these descriptions should be a prime intellectual priority.

* *

Description as Intellectual Craft in the Study of Literature (2013) 33 pp. https://www.academia.edu/4262467/Description_as_Intellectual_Craft_in_the_Study_of_Literature

Abstract: This is a series of notes in which I argue that better descriptive methods are a necessary precondition for more sophisticated and objective literary criticism. Description, though it does not give unmediated access to texts, requires methods for objectifying texts, methods which must be discovered in the doing. By way of comparison I discuss the role of description in biology and I discuss the use of images and diagrams as descriptive devices. Lévi-Strauss on myth and Franco Moretti on distant reading, though quite different, are up to the same thing: objectification.

* *

Description 2: The Primacy of the Text (2013) 41 pp. https://www.academia.edu/4866743/Description_2_The_Primacy_of_the_Text

Abstract: These notes consist of five posts discussing the description of literary texts and films and five appendices containing tables used in describing two manga texts (Lost World, Metropolis) and two films (Sita Sings the Blues, Ghost in the Shell 2: Innocence). The posts argue that the purpose of description is to let the texts speak for themselves. Further, it is through descriptions that the texts enter intellectual discourse.

* *

Description 3: The Primacy of Visualization (2015) 48 pp. https://www.academia.edu/16835585/Description_3_The_Primacy_of_Visualization

Abstract: Describing literary texts requires a mode of thought distinct from the discursive interpretation of them. It is a mode of thought in which various visual devices are central. These devices include: tables, trees and mental spaces, directed graphs and “sketchpads”. Visualization facilitates the objectification of literary form and objectification is necessary for objectivity. With objectivity comes the possibility of cumulative knowledge.

On the moral value of work

The title of Peter Coy’s Labor Day op-ed caught my attention: Work Is Intrinsically Good. Or Maybe It’s Not?, NYTimes, September 5, 2022. After quoting Thomas Carlyle, Coy observes:

Many of us agree that work is inherently good, character-building and a manifestation of one’s seriousness and reliability. However, others of us make a forceful argument that work for its own sake is pointless and ridiculous. Who’s right?

The antiwork movement is the one that’s getting most of the attention lately. There’s China’s “lying flat” movement. There’s quiet quitting. There’s the Great Resignation. And there’s the fact that lots of people simply don’t want to work anymore. In the United States, the labor force participation rate has fallen for two decades, and there were more than 11 million unfilled jobs on the last day of July this year.

But on this Labor Day, I want to focus on the other group: those who are anti-antiwork (or simply pro-work).

But then he equivocates, but only for a moment:

Why is work valuable for its own sake, though? In textbook economics, after all, leisure is the good stuff and work is the necessary evil. “How have so many humans reached the point where they accept that even miserable, unnecessary work is actually morally superior to no work at all?” the anthropologist David Graeber asked in a 2018 book.

Last week I read a smart piece of psychological research that I think answers Graeber’s valid question. Its title says it all: “The Moralization of Effort.”

Human beings evolved in societies that valued cooperation, the theory goes. People who work hard tend to be team players. So working hard in primitive societies was a costly but effective way of signaling one’s trustworthiness. As a result, our brains today are wired to perceive effort as evidence of morality. “Just as people will engage in unnecessary prosocial behavior to differentiate themselves as a superior cooperative partner,” the paper says, “displays of effort, including economically unnecessary effort, may serve a similar function.”

This concept isn’t new, but the “Moralization of Effort” paper tests it, and finds strong support for it, using seven clever experiments involving hundreds of people in the United States, South Korea and France. The choice of countries is interesting. Koreans are known to be hard workers. They even have a word for death by overwork: gwarosa. The French work fewer hours than most and pride themselves on their savoir vivre.

Later:

While the work-is-good mind-set may have had evolutionary advantages, it can backfire in the modern world, the authors contend. Millions of workers may “signal moral worth through structured drudgery,” they write, echoing Graeber. “We fear it has also created harmful incentive structures that reward workaholism and joyless devotion to mundane efforts that produce little value beyond the signal of effortful engagement.”

I think the pro-work camp, while quieter, remains larger and more influential than the antiwork camp that’s drawing all the headlines lately. Just look at the strong opposition to the universal basic income idea championed by Andrew Yang and others.

I count myself among those skeptical of the (strenuous) moralization of work. But I’m not sure what to replace it with.

There’s more at the link.

* * * * *

Two posts on the skeptical side:

Tuesday, September 6, 2022

Varieties of Spiritual Experience [35% of people in US & UK report spiritual experience]

Robert Wright interviews David B. Yaden, author of The Varieties of Spiritual Experience.

At about 14:11:

David B. Yaden: Right in the beginning of this project...we thought 'let's run a survey and see how these Freudian views and these Jamesian views hold up.' And it was meant to be footnotes throughout the book. We'd say 'here are some basic descriptive data bearing on this question.' [...] The main thing that's interesting is that these massive polling companies like Gallup, the General Social Survey, and others, have for decades run these big surveys asking people, 'have you ever had a spiritual or mystical experience.' [...] Somewhat shockingly about 35% of populations in the US and the UK endorsed these statements completely.

What’s it mean, this nursery rhyme with the diddle fiddle moon spoon?

I'm bumping this to the top to remind me of my intellectual roots in literary criticism.

* * * * *

cat&fiddle

You know, of course, that that’s (meaning) not quite what I’m after. It’s not that I don’t do meaning at all, but I tend to treat meaning as secondary. Form and construction, that’s my game. Still, meaning is what literary criticism is about – and this IS a literary text, albeit one of modest scope and wide renown and dating back at least to the sixteenth century (Wikipedia) – so let’s start there.

A minimal reading

Let’s start with what Attridge and Staten call a minimal reading (The Craft of Poetry, 2015). Set aside symbolic and hidden meanings. What’s this poem conjure up before the mind’s eyes and ears?
Hey, diddle, diddle,
The cat and the fiddle,
The cow jumped over the moon;
The little dog laughed
To see such sport,
And the dish ran away with the spoon.
The first line consists of the exclamation, “hey”, which is often used as a greeting, followed by two repetitions of “diddle”. What’s “diddle” mean? That’s tricky. According to the Oxford online dictionary it can mean cheat or swindle, but that requires an object. It can also mean “pass time aimlessly or unproductively”, but that’s North American usage, and we’ve got an earlier version of this rhyme from 1765 in London (Wikipedia). It is also slang for “have intercourse with”, but that requires an object. Fortunately the Wiktionary tells us that it can also be “a meaningless word used when singing a tune or indicating a rhythm.” Bingo! That’s what I thought.

So, our first line would seem to be just a greeting. To whom? No one is referenced. Perhaps to anyone listening.

On the second line, why the cat and the fiddle? My first thought was that fiddles are strung with catgut. Which is true, but apparently catgut in that sense doesn’t come from cats; it’s generally made from sheep or goat intestines (Wikipedia). “Cat and Fiddle” is also a common name for inns and is a common image in early medieval illuminated manuscripts (Wikipedia). But why? My best guess would be that unskilled fiddle playing sounds a bit like meowing. Let’s go with that.

But how do we get from the first line to the second, what’s the connection? Well, yes, there’s the rhyme, but that doesn’t mean anything, it’s just, you know, sound.

And where’d the cow and the moon come from in the third line? And what’s the literal meaning? Are we being asked to imagine a cow jumping a quarter million miles into space, rounding the moon, and returning back to earth? I suppose we could note that the image was in use a long time before anyone knew the distance, but even when they didn’t know the distance they knew perfectly well that the moon was so far away that no cow could possibly jump that high.

On the other hand, one can imagine that if you were very close to the ground while the moon was low in the sky, well then it’s just barely conceivable that an energetic and athletic cow might appear to rise above the moon in a mad dash of some sort. That’s certainly the kind of visual image the line conjures up (though without the mad dash) and that’s what you (more or less) see if you google “cow jumped over the moon” and look at the images. Moreover the fourth and fifth lines inform us that a little dog (therefore low to the ground) got a chuckle out of seeing that, that is, a cow over the moon.

So, that’s what it is. I don’t see that it poses any particular problems – yeah! a laughing dog, right – which leaves us with the last line. It’s perfectly straightforward. One can imagine it easily enough – the folks who made animated cartoons for the rhyme certainly did. It just doesn’t make any sense. Dishes and spoons are inanimate objects. They can’t run anywhere, either alone or together. To assert as much is to make a category mistake. Dishes and spoons can’t run and ideas can’t be green, much less colorless green, nor can they sleep, much less sleep furiously.

So, what have we got? The opening line greets someone and then marks time for four syllables. Then we have a cat and a fiddle, together, perhaps screeching in (dis)harmony. What that has to do with the delicious image of a cow jumping over the moon, I surely don’t know. But the dog thought it was funny enough. Perhaps the moon was a distraction so the dish and the spoon could run off without being observed by the dog.

Perhaps they’re running off to, you know, canoodle, perhaps as a prelude to diddling, in the vulgar sense. But this is a nursery rhyme, though there’s no reason a nursery rhyme can’t have meaning for the nurse that’s invisible to the child. But we’re looking for a minimal reading here, and that rules out hidden meanings.

But how does it work?

And yet this little bit of nonsense works, and works very well, otherwise it wouldn’t have stuck around across the centuries.

How does it work? That’s surely the question to ask. I’m asking it all the time. Asking is one thing, answering is another. But if we don’t ask, we’ll never get any answers, will we?

I can at least make some observations. The first two lines are connected by rhyme, diddle, fiddle. That is, they are connected by sound. And the conjunction in that second line is, by my conjecture above, about sound; scratchy fiddling sounds like cats meowing.

The third and last lines are likewise connected by sound, moon, spoon. The fourth and fifth lines, however, are not connected by sound to any other lines or to each other. How odd, how interesting. They seem, rather, to reinforce the obvious image one has from the third line. And, of course, the introduction of the dog gives us a third domestic animal after the cat and the cow. There are no animate beings in the last line, however. But we are told that these inanimate objects ran, that they are in fact animate.

I suppose I could continue with this line of inquiry, but I fear that I’d start repeating things I’ve already said. Surely there is more that needs to be said, but I don’t know how to say it. Yes, the text is playing with the difference between animate and inanimate. The animate cat is playing an inanimate fiddle. The animate cow is jumping over the inanimate moon. The animate dog is laughing at something it sees.

Now that’s interesting. In playing the fiddle the cat is expressing itself. Is the cow’s jump an expressive act as well? In laughing the dog is expressing ITSELF. But the dog is also seeing, a sensory verb. The first and only one. We’re playing around with things animate creatures can do. And then, and then we have that last line, with inanimate objects acting in an animate way. I can’t help feeling that, in context, the effect is to foreground animacy, liveliness, IN-ITSELF.

And even if I could clarify what I mean by that, perhaps by reference to a paper Dave Hays and I wrote about metaphor some years ago [1], that still won’t make the connection to sound. For that’s what we’ve got to do.

In the brain there’s motor tissue and there’s sensory tissue. Internally they’re much the same. It’s their connections outside the brain, to muscles or sense organs, that give them different functions. With language we’ve got both sensory and motor action on the side of signifiers and signifieds. You move your vocal organs to say “diddle” and you hear it as you do so. Similarly, you use your muscles to jump while you see the world change as you move through the air; you observe someone else as they jump and feel their motion residually in your motor system.

What’s going on in this little rhyme is that all these things, all these neural flows, are part of one unified action, an action that’s unified in a sense deeper than simultaneity. In poetry sound and sense are part of the same neural flow rather than being two simultaneous neural flows as they are in ordinary speech. THAT’s what we’ve got to understand. THAT’s what rhyme and meter are about. THAT’s what connects diddle diddle (nonsense sound) with the (merely) implicit sound of the cat’s fiddling. But also with the (implicit) sound of dog’s laugh. And what of moon spoon?

What of the diddle fiddle moon spoon?

More later, much later.

Vehicularization: A Control Principle in a Complex Animal with several levels of Modal Organization (with note on King Kong)

Another bump to the top, this time on general principle. Think of vehicularization as a control mechanism of natural intelligence.

* * * * *
 
I'm bumping this to the top because it has direct bearing on my current ring-composition work. Consider the temporal horizons of actions undertaken in each segment of King Kong. It opens in New York. Denham dreams of getting rich by making a movie of King Kong on Skull Island. That's indefinitely in the future. Once on board the ship the temporal horizon moves closer; now action is directed toward getting to Skull Island. Once on Skull Island the goal is simply to get into Kong's territory. And once inside Kong's territory the goal becomes simple and very immediate, to survive.

Once Darrow is successfully rescued, Kong is captured, and they're back in New York. Now the goal is simply to get rich, which will happen real soon now, as soon as the money starts rolling in from exhibiting Kong. And, of course, Darrow and Driscoll plan to get married. Important, but it doesn't drive the action. What drives the action is Kong's escape. Now, once again, the goal is immediate survival. Once Kong is dead, the couple can get married and live happily ever after.

Well, vehicularization is about temporal horizons in the control of behavior. It's about the brain, but we can also map it onto the geography of the journey in King Kong, giving us a temporal myth-logic. A similar analysis can be done for Heart of Darkness, though that's going to be tricky because of the double narration, and Gojira.

* * * * *

The following post consists of a section that David Hays and I removed from our article, Principles and Development of Natural Intelligence (downloadable PDF). The final draft was long and the journal editors asked that we cut it. This is much of what we cut.

I’ve interpolated some comments in italics and appended a later note. While the passage is best read in the context of the whole article—which explains how we used the mathematical notion of diagonalization and has a full discussion of behavioral mode (downloadable PDF)—it can be read independently. The general idea is of one system being nested within another such that the deepest, the innermost system, leads to the most immediate satisfactions, but also has the most restricted behavioral scope. A system with greater scope, while not capable of satisfying a basic need (e.g. for food, water, sex, companionship) is able to move the animal to a place in the environment where the innermost system can exact satisfaction. The outer system is thus serving as a vehicle for the inner system.

Vehicularization

This story can be brought to closure with the concept of vehicularization. Neurophysiologically, the concept is a generalization of Paul MacLean's (1978) familiar concept of the triune brain. In MacLean's conception the mammalian brain consists of a complex of reptilian grade embedded within one of paleomammalian grade which is in turn embedded within one of neomammalian grade. Restating this general idea in our terms, with the emergence of the diagonalization principle the modal system becomes embedded within the emergent sensorimotor system. As each new principle is implemented in new tissue the previous system becomes embedded within it.

Behaviorally, a higher order system serves as a vehicle for moving the organism to a place in the environment where control can safely be transferred to a lower order system. Conversely, when a lower order system is blocked without having satisfied the exit requirement of the current mode, transfer can be given over to a higher level system, which will then transport the organism to a location in the environment where satisfaction of the current exit condition is more likely. The overall effect of behavioral vehicularization can be stated in terms of a hill-climbing search strategy.
For example, you’ve been working hard and, all of a sudden, you notice that you’re ravenously hungry. Your lowest level system, the one that is actually capable of satisfying your hunger, wants to grab some food and start chewing. If a cheeseburger or a head of lettuce is close at hand, you can see it and grab it; you go into action and your immediate hunger is sated. If no food is available, however, you turn control over to a higher-level system that then goes looking for food. If you are in your home, you’ll go to the kitchen or the pantry and see what’s available now.
In hill-climbing a gradient is placed on the environment, the search space, such that locations most likely to satisfy the search, to fulfill the system's current need, are higher than unpromising locations. [Don't confuse the physically real environment and the abstract search space. The hill being climbed is abstract.] The organism then climbs to the top of the nearest hill and, if all is well, is satisfied. However, hill-climbing has a weakness; the local maximum may not be the global maximum for the search space. When this is the case the system is stuck at the local peak with no way of moving down it and then over to the global maximum.
So, you’re hungry. There’s nothing immediately to hand, but you smell something potentially delicious. You follow your nose and it leads you to a window. There’s no food on the window sill and the window’s filled by a screen you can’t break through. What do you do? If you insist on following your nose, you’re stuck. That’s a local maximum. So you’ve got to stop following your nose and do something else to take you to a place in your environment where following your nose will be more successful.
In such a situation a modal organism can only exit the current mode. That being done, the gradient which trapped it is lifted and it is now moving along a different gradient, quite possibly one defined by an exploratory or search mode. That is budgeting. Vehicularized organisms can deal with the situation by transferring control to a higher order system which can then move the lower order system away from its local maximum to a position in the environment closer to the global maximum for that lower system. When that location is reached control is then transferred to the lower system, which climbs the hill to satisfaction.
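The local-maximum trap and the higher-order rescue can be sketched in a toy program (my own illustration, not from the Benzon–Hays article): a two-peaked satisfaction gradient traps a greedy climber on the lower peak, and a "vehicle" move that relocates the climber lets the very same low-level routine reach the global peak.

```python
# Toy model of vehicularized hill-climbing. The landscape, step sizes, and
# peak locations are all invented for illustration.

def landscape(x):
    """Two-peaked 'satisfaction' gradient: local peak at x=2, global peak at x=8."""
    return max(0, 3 - abs(x - 2)) + max(0, 6 - 2 * abs(x - 8))

def climb(x, steps=50):
    """Low-level system: greedy one-step hill climbing until no neighbor is higher."""
    for _ in range(steps):
        best = max((x - 1, x, x + 1), key=landscape)
        if best == x:        # no uphill neighbor: stuck at a peak
            break
        x = best
    return x

start = 0
local = climb(start)          # the climber gets trapped on the local peak (x = 2)
relocated = climb(local + 4)  # a higher-order "vehicle" moves it past the valley;
                              # the same low-level climber now reaches x = 8
```

The point of the sketch is that nothing about the low-level `climb` routine changes; the higher-order system simply transports it to a region of the environment where its gradient-following succeeds.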

In such a vehicularized organism the modal system is stratified. The reorganizing mode [learning] is one which permits the resetting of the conditional elements of on-blocks [simple control triggers: ON Condition X DO Action Y], thus changing the coupling of the organism to its environment. The higher level modes, play, imitation, and language, permit a great deal of exploration and activity before the conditions and actions of on-blocks are committed. These modes are probably particularly important in building very complex conditional and actional elements.
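A minimal sketch of the on-block idea quoted above (ON Condition X DO Action Y), including the reorganizing step that resets a block's conditional element; the names and data structure here are my own, illustrating the mechanism rather than reproducing Hays's notation.

```python
# Hypothetical on-block machinery: conditions are tested against a dict
# representing the perceived world; the first matching block fires.
from dataclasses import dataclass
from typing import Callable

@dataclass
class OnBlock:
    condition: Callable[[dict], bool]   # ON: test against the perceived world
    action: str                         # DO: the behavior it triggers

def step(blocks, world):
    """Fire the first on-block whose condition matches the current world."""
    for b in blocks:
        if b.condition(world):
            return b.action
    return "explore"                    # no trigger fires: fall back to a search mode

blocks = [OnBlock(lambda w: w.get("food_visible"), "eat")]
print(step(blocks, {"food_visible": True}))    # -> eat
print(step(blocks, {"food_visible": False}))   # -> explore

# Reorganizing mode: recouple the organism to its environment by resetting
# the conditional element of an on-block (food is now signaled by smell).
blocks[0].condition = lambda w: w.get("food_smelled")
print(step(blocks, {"food_smelled": True}))    # -> eat
```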

This is particularly important in ontogeny. It is known that the human nervous system matures roughly in the phylogenetic order of its components (Milner 1976); that is, the innermost vehicles mature first. This suggests that the tissue which will be implementing the higher level principles matures under the guidance of the electro-chemical gradients generated by the activity of the lower vehicle, which is controlling the behavior (cf. Edelman 1978). Thus, we know one feature of the development of the nervous system is the early proliferation of synapses in a region followed by the elimination of many of these synapses (Cowan 1979, Purves and Lichtman 1980). That elimination might, for example, be the primary method of diagonalizing in cortical tissue. Once tissue had been diagonalized its critical period would be over. After that it could only learn patterns within the types specified by the diagonalization. Once the new tissue had been diagonalized it would be ready for the implementation of higher level control organized into higher level modes.

At this point it is clear we are once again firmly within the region governed by the biological principles of epigenesis. Any full understanding of the nervous system requires a deep understanding of how these biological principles shape neural tissue. But those principles are outside our purview. Our point is simply that vehicularization is a critical link between the action of the information principles and their implementation in neural tissue. It is our suspicion that a deeper understanding of vehicularization will lead to, or perhaps follow from, an understanding of the particular adjustments of developmental sequences which follow from the ontogenetic recapitulation of phylogeny (Gould 1977).

* * * * *

Note made on 8.29.2000:

I just read a bit on animal navigation that's relevant here. It's from C. R. Gallistel's article in The MIT Encyclopedia of the Cognitive Sciences. Animal navigation is mostly dead reckoning. It's only beacon guided when the animal is close to the target. In Gallistel's words, “Beacon navigation is the following of sensory cues emanating from the goal itself or from its immediate vicinity until the source of the sensory beacon is reached. Widely diverse species of animals locate goals not by reference to the sensory characteristics of the goal or its immediate surroundings but rather by the goal’s position relative to the general framework provided by the mapped terrain.”

What this means is that beacon guidance is a different mode from long-range navigation. It’s a different mode and a different vehicle.
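The two modes can be made concrete with a toy example (mine, not Gallistel's): dead reckoning integrates self-motion to track position against a map, while beacon guidance takes over only once the animal is within the beacon's sensory range.

```python
# Illustrative two-mode navigation: path integration far from the goal,
# beacon following near it. Headings, distances, and the range threshold
# are invented for the example.
import math

def dead_reckon(moves):
    """Path integration: sum (heading in degrees, distance) steps into a position."""
    x = y = 0.0
    for heading_deg, dist in moves:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    return x, y

def near_beacon(pos, beacon, radius=2.0):
    """Hand control to beacon guidance only inside the beacon's sensory range."""
    return math.dist(pos, beacon) <= radius

pos = dead_reckon([(0, 10), (90, 5)])   # east 10 units, then north 5 -> (10, 5)
print(near_beacon(pos, (10, 6)))        # close enough: switch modes (vehicles)
```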

Sunday, September 4, 2022

If you sell 20 copies of your book in a year, you're doing good

Into the Future: From Tyler Cowen through Kim Stanley Robinson to Kisangani 2150

I'm bumping this one to the top both on general principle and, more specifically, for its comments on the future of computing. I note that I wrote this before GPT-3 surprised everyone with its language prowess, but I don't see that as any reason to modify the views I express here. I also call attention to my remarks about fiction.

* * * * *
 
Once I’d finished Tyler Cowen’s The Great Stagnation I decided to take a look at his next book:
Tyler Cowen. Average Is Over: Powering America Beyond the Age of the Great Stagnation. Dutton (2013).
As the title indicates, the book looks toward the future, something I’m interested in these days. So I poked around, found things of interest, and made some notes.

I found Cowen’s remarks on Freestyle chess most interesting and have some notes on that in the next section. I skip over several chapters to arrive at his treatment of science in Chapter 11, where I spend an inordinate amount of time quarreling with his guesstimates about the capabilities of artificial intelligence. Then I move to the last chapter, “A New Social Contract?”. That brings us to the heart of this post.

For the world Cowen sketches in that last chapter seems roughly compatible with the one Kim Stanley Robinson created in New York 2140, which, as far as I can tell, takes place somewhat beyond the (unspecified) time horizon Cowen has in Average Is Over. I dig out an unpublished essay of Cowen’s, “Is a Novel a Model?”, and use it to situate the two books in the same ontological register. I argue, in effect, that New York 2140 takes more or less the world Cowen projects in Average, pushes it through climate change, cranks some parameters up to eleven, and concludes with a replay of the 2008 financial crisis, albeit to a rather different conclusion.

Then we arrive where I’m really going, the “Heart of Deepest Africa”, Kisangani, a commercial city in the center of the Congo Basin. What will life be like in Kisangani in 2150, a decade after the institutional upheaval that ends Robinson’s book? I don’t answer the question, I merely pose it.

Pour yourself some scotch, coffee, kombucha, whatever’s your pleasure. This is going to be a long one.

Freestyle chess and beyond

Cowen introduces Freestyle chess in Chapter 5, “Our Freestyle Future”. Freestyle chess is played by teams that include one or more humans and one or more computers (p. 78):
A series of Freestyle tournaments was held starting in 2005. In the first tournament, grandmasters played, but the winning trophy was taken by ZackS. In a final round, ZackS defeated Russian grandmaster Vladimir Dobrov and his very well rated (2,600+) colleague, who of course worked together with the programs. Who was ZackS? Two guys from New Hampshire, Steven Cramton and Zackary Stephen, then rated at the relatively low levels of 1,685 and 1,395, respectively. Those ratings would not make them formidable local club players, much less regional champions. But they were the best when it came to aggregating the inputs from different computers. In addition to some formidable hardware, they used the chess software engines Fritz, Shredder, Junior, and Chess Tiger.
Cowen later notes (p. 81):
The top games of Freestyle chess probably are the greatest heights chess has reached, though who actually is to judge? The human-machine pair is better than any human – or any machine – can readily evaluate. No search engine will recognize the paired efforts as being the best available, because the paired strategies are deeper than what the machine alone can properly evaluate.
And that was before AlphaZero entered the picture [1].

Cowen then uses Freestyle chess as his model for man-machine collaboration in the future of work. That’s a reasonable thing to do. But I have a caveat.

Most task environments are ill-defined and open-ended in a way that chess is not. From a purely abstract point of view, and given an appropriate rule for converting a stalemate into a draw, chess, like the much simpler tic-tac-toe, is a finite game. That is, there are only a finite number of games possible. So, in point of abstract theory, one could list all possible games in the tree and label each path according to how it ends (win, lose, or draw). Then to play you simply follow only paths that can lead to a win or, if forced, to a draw. However the number of possible games is so large that this is not a feasible way to play the game, not even for the largest and fastest of computers.
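The finiteness point can be made concrete with tic-tac-toe, which, unlike chess, is small enough that the complete game tree can actually be enumerated and every finished game labeled. A quick sketch (the standard totals are well known; the code itself is my own illustration):

```python
# Exhaustively enumerate every possible tic-tac-toe game and label the outcome
# of each leaf: an X win, an O win, or a draw.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board=None, player="X"):
    """Walk the full game tree, tallying X wins, O wins, and draws."""
    board = board or [""] * 9
    w = winner(board)
    if w:
        return {w: 1}
    if all(board):                      # board full, no winner
        return {"draw": 1}
    totals = {}
    for i in range(9):
        if not board[i]:
            board[i] = player           # make the move...
            sub = count_games(board, "O" if player == "X" else "X")
            board[i] = ""               # ...and undo it (backtrack)
            for k, v in sub.items():
                totals[k] = totals.get(k, 0) + v
    return totals

t = count_games()
print(t["X"], t["O"], t["draw"])   # 131184 77904 46080 (255,168 games in all)
```

With 255,168 possible games this runs in seconds; the corresponding enumeration for chess is astronomically beyond any computer, which is exactly the point being made above.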

However, while computers have been able to beat any human at chess for over two decades they still lag behind six-year-olds in language understanding, though they can be deceptive in “conversation”. Linguistic behavior isn’t the crisply delimited task world that chess is and so presents quite different challenges to computational mastery. The so-called “common sense” problem is pretty much where it was dropped with the eclipse of symbolic AI over a quarter of a century ago. Cowen isn’t mindful of this issue, which bleeds into many task domains, and so tends to over-generalize from Freestyle chess. I don’t think that over-generalization does much harm to his overall argument, but it does lead him to overplay his hand when he discusses the future of science.

A new kind of science?

So let’s skip to Chapter 11, “The End of Average Science” (p. 206):
For at least three reasons, a lot of science will become harder to understand:
  1. In some (not all) scientific areas, problems are becoming more complex and unsusceptible to simple, intuitive, big breakthroughs.
  2. The individual scientific contribution is becoming more specialized, a trend that has been running for centuries and is unlikely to stop.
  3. One day soon, intelligent machines will become formidable researchers in their own right.
On the first, it seems to me we made a good start on that early in the 20th century with relativity and even more so with quantum mechanics. Still, Cowen cites mathematical proofs that run on for pages and pages and take years for other mathematicians to verify. Are we in for more of that? Maybe yes, maybe no, who knows? And maybe, as I’ve been suggesting in connection with cognitive ranks theory [2], fundamentally new scientific languages will precipitate out of the chaos and return us to a regime where intuition can lead to breakthroughs. In the past, pre-Copernican astronomy had a horrendously complex model of the relations between the earth, sun, moon, and other planets, with epicycles upon epicycles. But then Copernicus suggested we center the model on the sun and Kepler abandoned circular orbits for elliptical ones, and the model became at once simpler (fewer parts) but also more sophisticated. This more sophisticated model was so sensitive that observed anomalies in the orbit of Uranus led to the discovery of Neptune.

I agree that specialization is here to stay (#2). As for machines becoming formidable researchers (#3), color me bemused and skeptical. Let’s skip ahead (217-218):
Most current scientific research looks like “human directing computer to aid human doing research,” but we will move closer to “human feeding computer to do its own research” and “human interpreting the research of the computer.” The computer will become more central to the actual work, even to the design of the research program, and the human will become the handmaiden rather than the driver of progress.

An intelligent machine might come up with a new theory of cosmology, and perhaps no human will be able to understand or articulate that theory. Maybe it will refer to non-visualizable dimensions of space or nonintuitive understandings of time. The machine will tell us that the theory makes good predictions, and if nothing else we will be able to use one genius machine to check the predictions of the theory from another genius machine. Still, we, as humans, won’t have a good grasp on what the theory means and even the best scientists will grasp only part of what the genius machine has done.
I sorta’ maybe kinda’ agree with the first paragraph, but stop at the point that it implies that second paragraph.

First, I’ve got a philosophical problem. Here we have an expert in cosmology, the best humankind has to offer. She’s examining this impossible-to-understand theory and the verification of predictions offered by a genius machine. How is she to tell whether she’s examining valid work or high-falutin’ nonsense? If she can’t understand what they’re doing, why should she believe the two genius machines? I’m sure this one can be argued, but I don’t want to attempt it here and now. So let’s just set it aside.

Let’s instead consider a weaker claim, simply that we’ll have genius machines whose capacity to propose theories, in cosmology, evolutionary biology, post-quantum mechanics, whatever, is as good as any human. How likely is that? I don’t know, but I want to see an argument, and I can’t see that Cowen has offered one. Nor can I see what he’d offer beyond, “computers are getting smarter and smarter by the day”.
 
In the opening chapter Cowen mentions Ray Kurzweil, one of the prime proponents of the inevitability of computational superintelligence that surpasses human intelligence. He neither endorses nor denies Kurzweil’s claims on that score, but he’s put them on the table. He goes on to assert (p. 6), “It’s becoming increasingly clear that mechanized intelligence can solve a rapidly expanding repertoire of problems.” OK. He mentions Deep Blue’s chess victory over Kasparov in 1997 and Watson conquering Jeopardy! in 2011. Exciting stuff, yes.

And then (p. 7):
We’re on the verge of having computer systems that understand the entirety of human “natural language,” a problem that was considered a very tough one only a few years ago. Just talk to Siri on your iPhone and she is likely to understand your voice, give you the right answer, and help you make an appointment.
And so on.

On this one I line up with the kids who think that understanding natural language is still a very tough problem, and, yes, I’m aware of the remarkable progress that’s been made since Cowen published this book only six years ago. But understanding natural language (I don’t know why Cowen put scare quotes around “natural language” when he should have put them around “understanding”) just gets more and more difficult the more we’re able to get our machines to do. Back in 1976 I co-authored an article with David Hays in which we predicted that the time would come when a computer model would be able to “read” a Shakespeare play in a way that would give us insight into what one’s mind does while reading such a text [3]. We didn’t say just when this might happen, but in my mind I was thinking 20 years.

Well, nothing like that was available in 1996. Not only that, but the framework in which we’d hazarded that prediction had collapsed. Whoops! We still don’t have machines that can read Shakespeare in an interesting way, nor do I see any on the horizon. And yet you can query Siri and get useful answers on a wide range of topics, something that wasn’t possible in 1976, though the Defense Department spent a lot of money trying to make it happen (and I read those tech reports as they were issued).

What’s going on? There’s a lot I could say, but not here. And certainly others know much of the story better than I do, especially the technical details.