Thursday, February 29, 2024

More fun with a hydraulic press that can generate 300 tons of pressure – crushing nuts into nut cake

Can you press steel screws, or bolts and nuts, back into a solid piece of metal using our new 300 ton hydraulic press? We will find out in this experiment!

What I find fascinating is how much time and effort our host – I don't know his name – devotes to creating the tooling to make these little experiments work. He's having fun with this! And why not? They have over 9 million subscribers, so they've got money coming in from this operation. Is there enough money to cover the cost of the new press, along with the housing and tooling they've constructed for it? I'd be surprised if there was, though the surprise is a pleasant one.
This shop is a commercial enterprise of some kind, evidently one involving custom work with machine tooling, and I would imagine the new press can be used in that business as well. After all, that's what the old press was doing before it was put to use making nonsense videos. Who knew or expected that those videos would become so popular?

Did you know that tulips track the sun across the sky? [phototropism]

Intelligence is a Process

That's the title of the most recent blog post by Arnold Kling.

In the case of artificial intelligence, we have a problem. There is no clear, settled definition of natural intelligence. If we are not sure what the natural thing is, how can we know what the artificial thing ought to be?

In fact, I want to claim that intelligence is not a thing at all. It is an ongoing process. It is like science. You should not think of science as a body of absolute truth. Instead, think of the scientific method as a way of pursuing truth.

One should resist the temptation to think of intelligence as a huge lump of knowledge that an entity possesses. Memorizing the encyclopedia does not constitute intelligence.

Human intelligence is collective. Pretty much everything I know I learned from other people, either directly or indirectly.

Human intelligence is not fixed. We are constantly trying out new ideas. Many of our beliefs are contestable. The science is not settled.

Human intelligence is evolutionary. We want to obtain true beliefs and to discard mistaken beliefs.

Institutions help to guide the evolutionary process. Free speech, open inquiry, and the scientific method for gathering and debating evidence are examples of such institutions. Jonathan Rauch coined the expression The Constitution of Knowledge.

I prefer to think of intelligence as residing in these collective institutions.

There's more at the link.

Wednesday, February 28, 2024

10,000 Posts at New Savanna!

Power, Persuasion, and Influence in The Valley

Henry Farrell has a new post at Crooked Timber: Dr. Pangloss’s Panopticon (Feb. 27, 2024). He's replying to Noah Smith's negative review of Acemoglu and Johnson's Power and Progress.

On power:

What Acemoglu and Johnson are saying is something quite different from what Noah depicts them as saying. For sure, they acknowledge that persuasion has some stochasticity. But they stress that it is not a series of haphazard accidents. Instead, under their argument, there are some kinds of people who are systematically more likely to succeed in getting their views listened to than other kinds of people. This asymmetry can reasonably be considered to be an asymmetry of power.

Under this definition, power is a kind of social influence. Again, it is completely true that it is extremely difficult to isolate social influence from other factors, proving that social influence absolutely caused this, that, or the other thing. But if Noah himself does not believe in the importance and value of social influence, then why does he get up in the morning and fire up his keyboard to go out and influence people, and why do people support his living by reading him?

I imagine Noah would concede that social influence is a real thing! And if he were actually put to it, I think that he would also have to agree to a very plausible corollary: that on average he, Noah Smith, exerts more social influence than the modal punter argufying on the Internet. Lots of people pay to receive his newsletter; lots of other people receive it for free. That means that he is, under a very reasonable definition, more powerful than those other people. He is, on average, more capable of persuading large numbers of people of his beliefs than the modal reply-guy is going to be.

This understanding of power is neither purely semantic nor empirically useless. Again, it may be really difficult to prove that Noah’s social influence has specific causal consequences in a specific instance. But the counter-hypothesis – that Noah’s ability to change minds, given his umpteen followers, is the same as the modal Twitter reply guy – is absurd. Occasionally, random people on the Internet can be temporarily enormously influential. Sometimes, super prominent people aren’t particularly successful at getting their ideas to spread. But on average, the latter kind of people will have more influence than the former. We can reasonably anticipate that people with lots of clout (whether measured by absolute numbers of followers, numbers of elite followers, bridging position between sparsely connected communities or whatever – there are different, plausible measures of influence and lively empirical debates about which matters when) will on average be substantially more influential than those with little or none. This means, for example, that it will be very difficult for ideas or beliefs to spread if they are disliked by the highly connected elite.

Now in fairness to Noah, Acemoglu and Johnson don’t help their case by using a wishy-washy seeming term like “persuasion.” But if you think about “persuasion” as some combination of “social influence” and “agenda control,” you will get the empirical point they are trying to make.

Core claims:

Acemoglu and Johnson’s core claims, as I read them, are:

  1. That the debate about technology is dominated by techno-optimists [they actually write this before Andreessen’s ludicrous “techno-optimist manifesto” but they anticipate all its major points].
  2. That this dominance can be traced back to the social influence and agenda setting power of a narrow elite of mostly very rich tech people, who have a lot of skin in the game.
  3. That their influence, if left unchecked, will lead to a trajectory of technological development in which aforementioned very rich tech people likely get even richer, but where things become increasingly not-so-great for everyone else.
  4. That the best way to find better and different technology trajectories, is to build on more diverse perspectives, opinions and interests than those of the self-appointed tech elite, through democracy and countervailing power.

Since I more or less endorse all these claims (I would slightly qualify Claim 1 to emphasize mutually reinforcing pathologies of tech optimism and tech pessimism), I think that Power and Progress is a really good book, in ways that you won’t understand if you just relied on Noah’s summary of it (I note that this book and my own with Abe Newman are both shortlisted for a very nice prize, but that is neither here nor there in my opinion of it). I haven’t read another book that lays out this broad line of argument so clearly or so well. And it is a very important line of argument that is mostly missing from current debates. Noah speculates that the book hasn’t gotten much attention because it is lost amidst the multitudes of tech pessimistic accounts. My speculation is that it has gotten less attention than it deserves because reviewers and readers don’t know quite how to categorize it, given that it approaches the issues from an unexpected slant.

The panopticon:

The panopticon may indeed have efficiency benefits. People can get away with far less slacking, if it works as advertised. But it also comes with profound costs to human freedom. And the technologies that are at the heart of the book’s argument – machine learning and related algorithms – bear a strong and unfortunate resemblance to Bentham’s panopticon. They too, enable automated surveillance at scale, perhaps making hierarchy and intrusive surveillance much, much easier and cheaper than they used to be. As Acemoglu and Johnson note:

The situation is similarly dire for workers when new technologies focus on surveillance, as Jeremy Bentham’s panopticon intended. Better monitoring of workers may lead to some small improvements in productivity, but its main function is to extract more effort from workers and sometimes also reduce their pay.

This is, I think, why Acemoglu and Johnson worry that machine learning might immiserate billions, another claim that Noah finds puzzling. Acemoglu and Johnson fear that it will not only remake the bargain between capital and labour, but radically empower authoritarians (I think they are partly wrong on this, but that authoritarian machine learning could instead lead to a different class of disasters: pick yer poison).

The post is longish, but excellent. I made the following comment, about power in Silicon Valley:

Excellent, Henry, at least as far as I got. I made it about halfway through before I just had to make a comment. I'm thinking about the piece you and Cosma Shalizi did about the culture of Doomerism, which is very much a Silicon Valley phenomenon. And it is also very relevant to any discussion of ideas, influence, persuasion, and POWER. I was shocked when such a mainstream magazine as Time ran a (crazy-ass) op-ed by Eliezer Yudkowsky.

I am reasonably familiar with his work. I have several times attempted to read a long piece he published in 2007, Levels of Organization in General Intelligence. I've been unable to finish it. Why? Because it's not very good. It's the kind of thing a really bright and creative sophomore does when they've read a lot of stuff and decide to write it up. You read it and think: the guy's bright; if he gets some discipline, he could do some very good work. Well, 2007 was a while ago, but as far as I can tell, he still doesn't have much intellectual discipline and certainly doesn't have deep insight into current AI or into human intelligence. But Time Magazine gave him scarce space in their widely read magazine.

That's power. Now, as far as I know, he's not again been able to place his ideas in such a venue. But even once is pretty damn good.

How'd that come about? Well there's a story, one I don't know in detail. But the story certainly involves money from Silicon Valley billionaires. He's been funded by Elon Musk and, I believe, by Peter Thiel (who's since become disillusioned with some of those folks). There's a lot of money coming into and through the world centered around LessWrong (which, BTW, has the best community-style user interface I've seen) from tech billionaires.

On technological trajectories, Acemoglu is one of a team of writers of a 2021 report out of Harvard, How AI Fails Us. Here's the abstract:

The dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This “actually existing AI” vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous, optimizing for artificial metrics of human replication rather than for systemic augmentation, and tending to concentrate power, resources, and decision-making in an engineering elite. Alternative visions based on participating in and augmenting human creativity and cooperation have a long history and underlie many celebrated digital technologies such as personal computers and the internet. Researchers and funders should redirect focus from centralized autonomous general intelligence to a plurality of established and emerging approaches that extend cooperative and augmentative traditions as seen in successes such as Taiwan’s digital democracy project and collective intelligence platforms like Wikipedia. We conclude with a concrete set of recommendations and a survey of alternative traditions.

That is much better than the view that dominates AI development today. Moreover, I believe it to be more technically feasible. But, despite the Harvard imprimatur, it doesn't have nearly as much power behind it as the Silicon Valley view of "a future of large-scale autonomous systems outperforming humans in an increasing range of fields."

“NATURALIST” criticism, NOT “cognitive,” NOT “Darwinian” – A Quasi-Manifesto

I've posted a new paper to the web, title above, links and preface below:
Research Gate:

Prefatory Note: What’s in a Name?

This was originally published in The Valve, now defunct, on March 31, 2010. It is an informal manifesto in a quasi-conversational style – pseudo-Socratic? – for my work in literary criticism. Acknowledging that literary criticism has long since become a game in which one must have a label for one’s approach, I chose the term “naturalist.” After acknowledging that the term is problematic in a humanities context, I go on to explain why I’ve chosen it to designate my work and, in particular, to differentiate it from “cognitive” criticism and “literary Darwinism,” with which it has superficial affinities.

I could link a lot more into this piece now, but I won't. It's a decent indicator of the directions I’ve been pursuing.

What do these 2 things have in common?

Tuesday, February 27, 2024

Evo, a genomic foundation model that learns across the fundamental languages of biology: DNA, RNA, and proteins.

There are more excerpts in the thread.

 H/t Tyler Cowen.

Has the pandemic “broken the back” of 9 to 5?

John Quiggin at Crooked Timber, Back to the office: a solution in search of a problem (2.27.24):

Once the lockdown phase of the pandemic was over, workers were in no hurry to return to the office. The benefits of shorter commuting times and the flexibility to handle family responsibilities were obvious, while adverse impacts on productivity, if any, were hard to discern.

Sceptics argued that working from home, though fine for current employees, would pose major difficulties for the “onboarding” of new staff. Four years into the new era, though, around half of all workers are in jobs they started after the pandemic began. Far from lamenting the lack of office camaraderie and mentorship, these new hires are among the most resistant to the removal of a working condition they have taken for granted since the start.

Nevertheless, chief executives have issued an almost daily drumbeat of demands for a return to five-day office attendance and threatened dire consequences for those who don’t comply. Although these threats sometimes appear to have an effect, workers generally stop complying. As long as they are still doing their jobs, their immediate managers have little incentive to discipline them, especially as the most capable workers are often the most resistant to close supervision. Three days of office attendance a week has become the new normal for large parts of the workforce, and attempts to change this reality are proving largely fruitless.

The upshot is that attendance rates have barely changed after more than two years of back-to-the-office announcements. The Kastle Systems Back to Work Barometer, a weekly measure of US office attendance as a percentage of February 2020 levels, largely kept within the narrow range of 46 to 50 per cent over the course of 2023.

This fact is finally sinking in. Sandwiched between two pieces about back-to-the-office pushes by diehard employers, the Australian Financial Review recently ran up the white flag with a piece headlined “Return to Office Stalls as Companies Give Up on Five Days a Week.”

This trend, significant in itself, also marks a change in power relations between managers and workers. Behind all the talk about “water cooler conversations” and “synergies,” the real reason for demanding the physical presence of workers is that it makes it easier for managers to exercise authority. The failure of “back to the office” prefigures a major realignment of power relationships at work.

Conversely, the success of working from home in the face of dire predictions undermines one of the key foundations of the “right to manage,” namely the assumption that managers have a better understanding of the organisations they head than do the people who work in them. Despite a vast literature on leadership, the capacity of managers to lead their workers in their preferred direction has proved very limited.

I would add that we all have different biorhythms and are alive to various kinds of tasks at different times of the day. I’m a morning person and tend to do my most intellectually demanding work in the morning. Can I do intellectual work in the afternoon, or even the evening? Sure, but I’m best in the morning. And there are times, when I’m on a hot streak, that I’ll do excellent work in a one- or even two-hour stretch in the middle of the night. When you work from home, as I’ve been able to do most of my adult life, you have more control over when you do what. And that likely increases one’s productivity rather than detracting from it.

The Fastest Way to AGI: LLMs + Tree Search [Neuro-symbolic]

Demis Hassabis (Google DeepMind CEO)

About 0:23:

....we've got to carry on basically making them more and more accurate predictors of the world, in effect making them more reliable world models. That's clearly a necessary, but I would say probably not sufficient, component of an AGI system. And then on top of that, we're working on things like AlphaZero-like planning mechanisms that make use of that model in order to make concrete plans to achieve certain goals in the world, and perhaps chain thoughts, or lines of reasoning, together, and maybe use search to explore massive spaces of possibility. I think that's kind of missing from our current large models...

About 2:15

my bet would be that the final AGI system will have these large multimodal models as part of the overall solution, but they probably won't be enough on their own. You will need this additional planning search on...
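The planning-on-top-of-a-model idea Hassabis describes can be illustrated, very loosely, with a toy beam-style tree search. Everything here is hypothetical scaffolding: `ToyModel`, its `propose` and `value` methods, and the digit-string "states" merely stand in for an LLM world model and its scoring; a real system would be vastly more elaborate.

```python
class ToyModel:
    """Stand-in for a learned world model. States are digit strings;
    value() rewards high digit sums. In a real system, propose() would be
    an LLM generating candidate continuations and value() a learned critic."""

    def propose(self, state):
        # Candidate next states: append one digit to the current state.
        return [state + d for d in "0123456789"]

    def value(self, state):
        # Score a state; here, just the sum of its digits.
        return sum(int(c) for c in state) if state else 0


def plan(model, root="", max_depth=3, beam_width=4):
    """Beam-style tree search over model proposals: at each depth, expand
    every state in the beam and keep only the beam_width best candidates,
    a crude analogue of planning mechanisms layered on a world model."""
    beam = [root]
    for _ in range(max_depth):
        candidates = []
        for state in beam:
            for nxt in model.propose(state):
                candidates.append((model.value(nxt), nxt))
        candidates.sort(reverse=True)          # best-scoring candidates first
        beam = [s for _, s in candidates[:beam_width]]
    return max(beam, key=model.value)


print(plan(ToyModel()))  # the search settles on "999", the top-value 3-step plan
```

The point of the sketch is the division of labor: the model only proposes and scores; the search procedure, not the model, strings proposals into a plan.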

Purple Person

AI and intellectual integration @3QD

I have a new article up at 3 Quarks Daily:

Western Metaphysics is Imploding. Will We Raise a Phoenix from The Ashes? [Catalytic AI],

It is about philosophy, though not philosophy as it is currently practiced as an academic discipline. I like it. In fact I like it a lot.

Why? Because it’s built on a number of articles I’d previously published in 3QD as well as other work I’d published about AI over the past year, ChatGPT in particular. When I finally posted it on Sunday afternoon, it felt good, really good. “Man, I’ve got something here,” said I to myself.

When I got up early Monday morning, so early one might call it late Sunday night, I looked at the article and started glancing through it. “Holy crap!” thought I to myself, “when people start reading this they’re going to think they’ve landed in one of those classic New Yorker essays that wander all over the place before getting to the point, if there is one. What happened?”

Those are two very different reactions: “I’ve got something here” vs. “Holy crap!” Conclusion: I’ve got some work to do.

I might as well begin here and now.

Meaning, intention, and AI

One of my friends remarked, “you are too smart for me.” I took that to be a polite and diplomatic way of saying that he figured there must be something there but he sure couldn’t find it. How’d I get from his remark to that interpretation? I can tell you what it didn’t involve: conscious, deliberate thought. I simply knew that’s what he was saying. I intuited the intention behind my friend’s words, an intention that I’ve subsequently verified.

Intentionality – closely related to but not quite the same as intention – is at the heart of the classic argument against AI. As far as I know that argument was first articulated by Hubert Dreyfus back in 1969 or '71, in that time frame, but it is probably best known from John Searle’s Chinese Room argument, which first appeared in 1980 in Behavioral and Brain Sciences. That argument has been refitted for the current era, perhaps most visibly by Emily Bender, who coined the phrase “stochastic parrot” to characterize the actions of Large Language Models (LLMs).

I accept that argument. The problem, however, is that it’s one thing to have made that argument at a time when AI systems responded to human input in a relatively simple and straightforward way, which was the case when Dreyfus and Searle made their arguments. Back then the argument supplied a fairly satisfying – at least to some people – account of why AI won’t work. Now, in the face of ChatGPT’s much more impressive performance, you are asking a lot more from that argument, more, I’ve argued elsewhere, than it can reasonably deliver.

The issue here is the gap between our first-person experience of the machine and what the machine is actually doing. Back in Searle’s time the philosophical concept of intentionality was able to account for that gap, at least for some of those familiar with the concept. In the case of ChatGPT the nature of that gap is quite different. To a first approximation, our first-person experience is that we’re conversing with a person that has a strange name, ChatGPT. Some people have strange names and stilted discourse is not uncommon. If ChatGPT is in fact a person, then there is no gap to account for. We know, however, that ChatGPT is NOT a person. It’s a machine.

We are now faced with a HUGE gap. What’s the machine doing? We don’t know. The people who built these systems can’t tell us what they’re doing – a point I make in the first section of the article after the introduction, “Views about Machine Learning and Large Language Models.” They can’t even tell themselves what the machine is doing much less craft a simplified account, based on metaphors and analogies, for the rest of us. They know how the system builds an LLM and how it accesses the LLM, but they don’t know what’s going on inside the LLM itself, with its billions and billions of parameters.

That’s one thing. This business about bridging the gap between first-person experience and what’s really going on, that’s a second thing. That’s a view of philosophy articulated by Peter Godfrey-Smith, which I discuss in the second part of the article, “Philosophy’s integrative role.” “Integrative” is the word he uses for that function that philosophy plays in the larger intellectual discourse. His argument is that philosophy has largely abandoned that role and that it needs to get back to it. My argument is that nowhere is that more important than in the case of artificial intelligence.

I spend the rest of the article making that point. First, I digress into a section entitled, “Tyler Cowen, Virtuoso Infovore,” where I also discuss Richard Macksey. Cowen has recently argued, in effect, that the very greatest economists, in addition to their specialized work within economics, have also performed that integrative role on behalf of the larger intellectual public. Then I get to the argument I’ve been chasing all along, “Artificial Intelligence as a catalyst for intellectual integration,” which you are welcome to read.

But I want to get back to my friend’s response to my article and say a few words about that.

Intention, intuition and deduction in “intelligence”

How did my friend arrive at that statement he made to me? I don’t know. But I’m guessing it was mostly by intuition rather than explicit deductive reasoning. He’d read the article and was puzzled, conjured up our relationship and, voilà! out comes the statement, “you are too smart for me.” Simple as pie.

Could he have arrived at that statement through a process of rational deduction? Possibly. How might that have gone?

ONE: FACT: The article doesn’t make sense to me.

TWO: There are three possibilities: 1) It’s nonsense, or at least deeply flawed. 2) It’s fine but too abstract for me. 3) Some combination of the first two.

THREE: PREMISE: Bill’s a smart guy. CONCLUSION: It’s probably 2 or 3. What do I say?

FOUR: FACT: Bill’s a friend. THEREFORE: I’ll give him the benefit of the doubt and base my response on 2.

FIVE: PREMISE: The article is too abstract for me. PREMISE: I’m smart. FACT: Bill made the argument. THEREFORE: Bill must be very smart...

SIX: Here’s what I’ll say: “You are too smart for me.”

As logical arguments go, it’s rather rickety. I would hate to have to formulate it in terms of formal logic. But you get the idea. Logically, it’s a tangled mess.

In the annoying manner of textbooks, I leave it as an exercise for the reader to make a similar argument about how I knew what my diplomatic friend was telling me.

I do not believe that ChatGPT is capable of anything like this, though, given that there’s been tons of fiction in its training corpus, containing millions and millions of lines of dialog, it might provide a passable simulacrum in this or that case. The situation will not change when the underlying LLM has more parameters and has been trained on a larger dataset, assuming there’s one to be had. The limitation is inherent in the technology.

Critics like Gary Marcus argue that LLMs need to be augmented by the capacity for symbolic reasoning if they are to be truly intelligent, whatever that is. I agree. Symbolic reasoning will get you a lot, but not a whole hell of a lot in the situation I’ve been discussing here. It would give you the capacity to run through that pseudo-deduction, though in even more detail.

On that basis I don’t expect that AI and ML systems will be able to handle the nuances of human interaction in the foreseeable future, if ever. We’ve come a long way, and we have a long way to go.

Time for something different! Pizza and Fanta Orange, Hegelian style: Thesis, Antithesis, Synthesis

The Adventures of Task-Force Tim: An Allegory About the Current Effort to Regulate AI

John Sowa is an Old School AI researcher who spent his career at IBM until he retired. Since then he’s pursued various projects in AI (see this video, Evaluating and Reasoning with and about GPT). He’s got an extensive website that includes some materials from the old days, including some remarks about a project that IBM started in 1971. It didn’t work out. In the process, one member of Sowa’s department, Bob Bacon, did a comic illustrating the problems that inevitably came up.

Sowa has those on his website where anyone can see them. So I assume he has no problem with me putting them in this post. I’m thinking of them as an allegory on current efforts to regulate research in artificial intelligence and machine learning. [Click on images to enlarge them.]

Monday, February 26, 2024

Marc Andreessen on His Intellectual Journey the Past Ten Years

From the YouTube page:

Marc Andreessen of a16z sits with Erik Torenberg to go deeper into his intellectual and political journey, and his quest to find out how the world works.

(00:00) Intro
(03:25) How much has Marc changed vs the world changed?
(06:55) How much do ideas matter? Who drives society — the elite or the masses?
(09:56) People respond to interests more than ideas
(12:08) Mental models for the left and the right
(14:45) Sponsors (Secureframe | Mercury | MarketerHire)
(16:53) The road to hell is paved with good intentions
(18:40) Master morality and slave morality
(23:20) Unpacking Elon’s quote “Wokeness is the mind-virus”
(26:00) Is classical liberalism sustainable and how it leads to wokeness
(35:00) James Burnham’s worldview
(42:50) How the left captured the institutions
(45:20) Elon as the return to entrepreneurial capitalism
(48:03) The experiments Elon is running
(54:50) The billionaire mindset toward politics
(57:30) We live in an oligarchy, not a democracy
(1:06:00) Larry Page is the contrarian billionaire
(1:07:30) Effective Altruism’s blind spots
(1:10:36) Effective altruists think they can play god
(1:11:16) SBF’s roll-the-dice philosophy
(1:13:23) Aristocratic vs Meritocratic elite
(1:21:48) Elites are insulated from the consequences of their policies
(1:23:53) Why global governance is a nerd trap
(1:27:16) Global governance is anti-diversity
(1:30:42) Tech people are politically homeless
(1:31:37) Elites can’t be removed, they can only be replaced
(1:35:02) Advice for counter-elites
(1:47:21) Reasons to be optimistic

Andreessen makes an interesting observation at about 46:10: venture capital is, in effect, the return of bourgeois capitalism within managerial capitalism. There are interesting observations scattered throughout, including Andreessen's reaction on first being invited into, shall we say, the Davos crowd.

Razor wire [purple]

The energy costs of generative AI are currently large enough to impact the environment

David Berreby, The Growing Environmental Footprint Of Generative AI, Undark, Feb. 20, 2024.

One of the things that David Hays impressed on me when I studied with him back in the 1970s is that computing, real computing, is a physical process. It requires physical resources and extends out in time. The emergence of generative AI makes that abundantly clear. From the article:

AI can run on many devices — the simple AI that autocorrects text messages will run on a smartphone. But the kind of AI people most want to use is too big for most personal devices, Dodge said. “The models that are able to write a poem for you, or draft an email, those are very large,” he said. “Size is vital for them to have those capabilities.”

Big AIs need to run immense numbers of calculations very quickly, usually on specialized Graphical Processing Units — processors originally designed for intense computation to render graphics on computer screens. Compared to other chips, GPUs are more energy-efficient for AI, and they’re most efficient when they’re run in large “cloud data centers” — specialized buildings full of computers equipped with those chips. The larger the data center, the more energy efficient it can be. Improvements in AI’s energy efficiency in recent years are partly due to the construction of more “hyperscale data centers,” which contain many more computers and can quickly scale up. Where a typical cloud data center occupies about 100,000 square feet, a hyperscale center can be 1 or even 2 million square feet.

Estimates of the number of cloud data centers worldwide range from around 9,000 to nearly 11,000. More are under construction. The International Energy Agency, or IEA, projects that data centers’ electricity consumption in 2026 will be double that of 2022 — 1,000 terawatt-hours, roughly equivalent to Japan’s current total consumption.

However, as an illustration of one problem with the way AI impacts are measured, that IEA estimate includes all data center activity, which extends beyond AI to many aspects of modern life. Running Amazon’s store interface, serving up Apple TV’s videos, storing millions of people’s emails on Gmail, and “mining” Bitcoin are also performed by data centers. (Other IEA reports exclude crypto operations, but still lump all other data-center activity together.)


Another complication is the fact that AI, unlike Bitcoin mining or online shopping, can be used to reduce humanity’s impacts. AI can improve climate models, find more efficient ways to make digital tech, reduce waste in transport, and otherwise cut carbon and water use. One estimate, for example, found that AI-run smart homes could reduce households’ CO2 consumption by up to 40 percent. And a recent Google project found that an AI, fast-crunching atmospheric data, can guide airline pilots to flight paths that will leave the fewest contrails.

Because contrails create more than a third of commercial aviation’s contribution to global warming, “if the whole aviation industry took advantage of this single A.I. breakthrough,” says Dave Patterson, a computer-science professor emeritus at UC Berkeley and a Google researcher, “this single discovery would save more CO₂e (CO₂ and other greenhouse gases) than the CO₂e from all A.I. in 2020.”

Patterson’s analysis predicts that AI’s carbon footprint will soon plateau and then begin to shrink, thanks to improvements in the efficiency with which AI software and hardware use energy. One reflection of that efficiency improvement: as AI usage has increased since 2019, its percentage of Google data-center energy use has held at less than 15 percent. And while global internet traffic has increased more than twentyfold since 2010, the share of the world’s electricity used by data centers and networks increased far less, according to the IEA.

However, data about improving efficiency doesn’t convince some skeptics, who cite a social phenomenon called “Jevons paradox”: Making a resource less costly sometimes increases its consumption in the long run.
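The Jevons paradox is easy to state in a toy model. In the sketch below, all the numbers and the constant-elasticity demand curve are illustrative assumptions, not anyone's actual data: a 2x efficiency gain halves the effective cost of computation, and whether total energy use falls or rises depends on how elastic demand is.

```python
# Toy constant-elasticity model of the Jevons paradox: usage scales as
# (efficiency gain) ** elasticity, while energy per unit of usage falls
# by the efficiency gain. Elasticity > 1 means total consumption rises.
def energy_after_gain(base_energy, gain, elasticity):
    usage_multiplier = gain ** elasticity      # cheaper compute invites more use
    return base_energy * usage_multiplier / gain

base = 100.0  # arbitrary units of energy

# Inelastic demand (elasticity 0.5): consumption falls despite more usage.
print(round(energy_after_gain(base, 2.0, 0.5), 1))  # 70.7

# Elastic demand (elasticity 1.5): consumption rises -- the Jevons paradox.
print(round(energy_after_gain(base, 2.0, 1.5), 1))  # 141.4
```

The skeptics' point, in other words, is that efficiency alone doesn't settle the question; it depends on how much new demand cheap AI creates.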

There's more at the link.

Foreground filigree

NYC is the world's capital of endangered languages.

Alex Carp, The Endangered Languages of New York, NYTimes, Feb. 22, 2024.

Most people think of endangered languages as far-flung or exotic, the opposite of cosmopolitan. “You go to some distant mountain or island, and you collect stories,” the linguist Ross Perlin says, describing a typical view of how such languages are studied. But of the 700 or so speakers of Seke, most of whom can be found in a cluster of villages in Nepal, more than 150 have lived in or around two apartment buildings in Brooklyn. Bishnupriya Manipuri, a minority language of Bangladesh and India, has become a minority language of Queens.

All told, there are more endangered languages in and around New York City than have ever existed anywhere else, says Perlin, who has spent 11 years trying to document them. And because most of the world’s languages are on a path to disappear within the next century, there will likely never be this many in any single place again.

Perlin's book: Language City: The Fight to Preserve Endangered Mother Tongues in New York (2024).

With Daniel Kaufman, also a linguist, Perlin directs the Endangered Language Alliance, in Manhattan. When E.L.A. was founded, in 2010, Perlin lived in the Chinese Himalayas, where he studied Trung, a language with no standard writing system, dictionary or codified grammar. (His work helped establish all three.) He spent most of his time in the valley where the largest group of remaining speakers lived; the only road in or out was impassable in winter. [...]

Since their project began, Perlin and Kaufman have located speakers of more than 700 languages. Of those languages, at least 150 are listed as under significant threat in at least one of three major databases for the field. Perlin and Kaufman consider that figure to be conservative, and Perlin estimates that more than half of the languages they documented may be endangered.

There's more at the link.

Sunday, February 25, 2024

The human challenge of a mission to Mars

Nathaniel Rich, Can Humans Endure the Psychological Torment of Mars?, The New York Times Magazine, Feb. 25, 2024. From the article:

That people will travel to Mars, and soon, is a widely accepted conviction within NASA. The target date for the initial human mission has drifted slightly — in a 2018 report commissioned by Congress, NASA estimated that the first human beings would land on Mars “no later than the late 2020s” — but the certainty has not wavered, even if technical hurdles remain. Rachel McCauley, until recently the acting deputy director of NASA’s Mars campaign, had, as of July, a punch list of 800 problems that must be solved before the first human mission launches. Many of these concern the mechanical difficulties of transporting people to a planet that is never closer than 33.9 million miles away; keeping them alive on poisonous soil in unbreathable air, bombarded by solar radiation and galactic cosmic rays, without access to immediate communication; and returning them safely to Earth, more than a year and a half later. Many other problems involve technical details so arcane that McCauley wouldn’t even know how to begin explaining them to a well-intentioned journalist lacking an advanced engineering degree. But McCauley does not doubt that NASA will overcome these challenges. What NASA does not yet know — what nobody can know — is whether humanity can overcome the psychological torment of Martian life.

Enter CHAPEA. Instead of asking questions about aeroshell sensor design and terrain-relative navigation, it promised to ask questions about people. For 378 days, four ordinary people would enact, as closely as possible, the lives of Martian colonists, receiving directives, feedback and near-total surveillance from mission control. They would eat astronaut food, conduct basic experiments, perform maintenance duties, respond to endless surveys and enjoy highly structured down time. This level of extreme verisimilitude is necessary to ensure that the experiment accurately determines whether human beings can thrive while living millions of miles from everybody they’ve ever known.

There's much more at the link.

Here's that rainy day – follow the red umbrella

Entropy and Self-Organization on the Table Top [Emergence]

I'm bumping this to the top of the queue: (1) on general principle; (2) because I was going to include some of this material in a paper that goes up at 3 Quarks Daily, but then decided not to use it, so here it goes; and (3) because it's a good antidote to the all-too-frequent and casual use of the concept of "emergence," which more often than not is used as a sophisticated synonym for "magic": we don't really know what's going on, so we'll call it emergence. Well, in this case, the appearance of convection cells, we know what's happening:

The tumbler was sitting on a window sill during the morning and mid-day on a sunny day. Sunlight came through the window and heated the water, just a bit. But the black carbon particles in the ink absorbed energy faster than the water molecules, making them warmer than the water in which they were immersed. That’s what supported the formation of those convection cells, which began dissipating within an hour after they had formed. No violation of the Second Law of Thermodynamics. You can download this discussion as a PDF from this link.

With a Note on What to do When You are Fascinated by Technical Concepts but Lack the Math: Call the Plumber!


Matter is not given. In the present-day view it has to be constructed out of a more fundamental concept in terms of quantum fields. In this construction of matter, thermodynamic concepts (irreversibility, entropy) have a role to play. 
–Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, 1984

I’ve been thinking a lot about entropy lately.

It is, of course, one of the foundational concepts of modern thought, haunting our dreams with the prospect of the universe grinding to a halt in heat death, but also animating our hope of understanding how life arose in the universe. In a Latourian context one might even speculate that entropy is the concept that, more than any other (except perhaps biological evolution, with which it has become richly intertwined), gives the lie to the Moderns’ conceit that they are here and nature is somewhere over there, separated from one another by a sharp line of clear and distinct ideas. For the concept of entropy, unlike relativity and quantum mechanics, has arisen from deep within the world of classical physics.

According to Wikipedia, the term was coined in 1865 by Rudolf Clausius, but the work leading to the concept originated earlier in the century with the research of Lazare Carnot, a mathematician whose
1803 paper Fundamental Principles of Equilibrium and Movement proposed that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire which posited that in all heat-engines whenever "caloric", or what is now known as heat, falls through a temperature difference, work or motive power can be produced from the actions of the "fall of caloric" between a hot and cold body.
There you have it, the machine, a mechanical device with moving parts. We have Newtonian mechanics with its three laws of motion and the grand suggestion that the universe works like a clock, a vast device of many parts all ticking away in perfect order, except when they don’t. And there’s La Mettrie’s 1748 treatise, Man a Machine.

Oh! how easy our intellectual life would have become if only the universe were nothing but a clock and we but little tick-tocks within it.

But it is not, nor are we. The mechanistic vision ground to a halt in the analysis of fire and we became but especially clever monkeys through Darwin’s elucidation of a pattern he traced through the geological, paleontological, botanical and zoological records.

Chasing Molecules

Though my interest in entropy is long-standing, my recent thoughts have been occasioned by various and numerous remarks the philosopher Levi Bryant has made at Larval Subjects, his blog. The post Entropy and Me is a representative example. Or, consider this passage from his book, The Democracy of Objects (pp. 227-228):
Entropy refers to the degree of disorder within a system. Suppose you have a tightly closed glass box and somehow introduce a gas into it. During the initial phases following the introduction of the gas into the system, the gas will be characterized by a high degree of order or a low degree of entropy. This is so because the particles of gas will be localized in one or the other region of the box. However, as time passes, the degree of disorder and entropy within the system will increase as the gas becomes evenly distributed throughout the box. In this respect, entropy is a measure of probability. If the earlier phases of the gas distribution indicate a lower degree of entropy than the later stages, then this is because in the earlier phases there is a lower degree of probability that the gas will be localized in any one place in the box. As time passes, the probability of finding gas particles located evenly throughout the box increases and we subsequently conclude that the degree of entropy has increased.
This seemed a bit, well, “off” to me. For one thing Bryant doesn’t say just how the gas gets introduced into the box. Surely he doesn’t mean that it gets magically whisked there through a Star Trekkian transporter. But what DOES he mean?

Well, he probably meant something like poking a small hole somewhere in the box and letting the air rush in. So that’s what I did. Not physically, of course, as I have no convenient source of high-vacuum boxes, but in my imagination.

I began imagining lots and lots of tiny tiny air molecules going in through the hole. Does that first cohort march in formation like a highly trained marching band or drill team, or do they twist and tumble every which way, pushed by the molecules behind them, and those behind them, and so forth? How fast do they move? Who’s the first to make it to the other side? And how do you measure their positions?

It seemed reasonable to think, as Bryant more or less stated (except, remember, he said nothing about a hole), that they’d be bunched up near the hole at the beginning and that, at the end, they’d be scattered evenly throughout the box. But how’d they get from one state to the other? Getting from New Jersey to New York is easy, there’s the Holland Tunnel, the Verrazano-Narrows Bridge, and so forth. But the kind of states we’re talking about aren’t geographical regions and moving from one to the other is not like getting in a car, turning the key (or pushing the button) and driving away.

And, by the way, just what does “evenly” mean? It might mean that they’re at the vertices of a cubic lattice, or some other regular structure, but I suspect that that’s not what Bryant meant. If not THAT, though, then just what? Perhaps he was, in his imagination, dividing the box into lots of tiny cubes. We then count the number of molecules in each cube. It doesn’t matter just where they are in the cube, just so they’re inside it. Some place. And when we’ve done our count we find that there’s approximately the same number in each imaginary cube.

Now we’re getting somewhere, says I to myself, we’re making progress.

But no, we’re not, we’re just getting deeper and deeper into the quicksand. What’s the size of our imaginary cubes? Does it matter? And those molecules, they’re moving, right, always moving. Since we can’t possibly examine all these imaginary cubes at one time, but have to look at one after another, how do we keep those molecules inside their proper imaginary cubes? And, since the little critters are identical to one another, how can we be sure that some of them aren’t sneaking about from cube to cube just to mess up our count?

Now, you might say, this is all nonsense, this stuff about imaginary cubes and pesky molecules who are unwilling to sit still for the count. Well, yes, you’re right, it’s nonsense in a way. But, if Bryant’s talk about order and probability is to have any substantive meaning, then we really do have to have some way of locating and counting those molecules. We need some way of taking measurements and my imaginings, some of them anyhow, are aimed at the informal notion of evenness. If we're going to measure it, well, what does it mean? Without measurements we’re just talking gibberish.
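One way to make that informal notion of evenness concrete is exactly the imaginary-cubes move: pick a grid size, count molecules per cell, and compute the entropy of the resulting occupancy distribution. Here's a minimal sketch in Python; everything in it (the grid resolution, the molecule count, the "bunched near the hole" starting state) is my own illustrative assumption, and it's bookkeeping over random points, not molecular dynamics:

```python
import math
import random

def grid_entropy(points, k):
    """Shannon entropy (bits) of the occupancy counts over a k x k grid."""
    counts = [0] * (k * k)
    for x, y in points:
        i = min(int(x * k), k - 1)  # clamp in case a coordinate equals 1.0
        j = min(int(y * k), k - 1)
        counts[i * k + j] += 1
    n = len(points)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

random.seed(0)
N, k = 10_000, 10  # number of "molecules" and grid resolution: illustrative choices

# "Early" state: molecules bunched near a hole on the left wall of a unit box.
early = [(random.uniform(0, 0.1), random.uniform(0.45, 0.55)) for _ in range(N)]

# "Late" state: molecules spread uniformly through the box.
late = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(N)]

h_early, h_late = grid_entropy(early, k), grid_entropy(late, k)
print(h_early < h_late)  # True: the spread-out state has higher entropy
print(h_late <= math.log2(k * k))  # True: entropy is capped by the cell count
```

Notice that the answer depends on the grid size k, which is just the question raised above: the size of the imaginary cubes does matter, and this kind of coarse-grained entropy is only defined relative to a choice of them.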

Still, it’s clear that something isn’t working. My thinking was at an impasse. I’m in over my head. What to do?

Call the Plumber

My plumber is Tim Perper. He’s not a plumber, though; he’s not even a physicist. He was trained as a molecular biologist and geneticist, worked in industry for a bit, worked in academia for a bit, and then decided that he was really more interested in human courtship than in complex molecules. So he spent a couple of years hanging out in bars, night clubs, church socials and such, writing down what he saw people doing — all courtesy of the Guggenheim Foundation. He wrote that work up in a book, Sex Signals (1985), along with, of course, a lot more, including Ovid and Durkheim.

Do I glimpse the collapse of Western metaphysics thru the trees?

9,981 and counting. How long before New Savanna has 10,000 posts?

I have published 9,981 posts at New Savanna since it first went live in April of 2010.

Some of those are long-form articles I have written, 1,000 to three or four thousand words or more. Many are shorter, 100 to 1,000 words. Many are excerpts from articles published elsewhere. Some are little more than a link or three.

Then we have articles that contain tweets. I've recently added one or two Instagram posts.

Some articles contain one or more videos of various types; music and podcasts are the most frequent kinds.

Finally, the photographs of various subjects: graffiti, Hoboken, Jersey City, Manhattan, the Hudson River, food, flowers, trees, grass, and so forth.

How long will it take me to post ten thousand more?

Saturday, February 24, 2024

Morales, Birds of Paradise | Indiana University Trumpet Ensemble

When I was in high school in the 1960s there were 15 or 17 trumpet players, I forget exactly how many, in the marching and concert bands. Only two of them were female. This trumpet ensemble from Indiana University, which has an excellent brass program, has six young women. The times they have changed.

Why snow days are good for kids

Michael Venutolo-Mantovani, What kids lose without snow days, Vox, Feb. 24, 2024.

With the proliferation of virtual learning, do kids even get to enjoy the magic of an unexpected snow day anymore? Are true snow days an endangered species?

Earlier this month, nearly 1 million students in New York City’s public school system learned that their schools would remain open, despite the threat of a predicted half-foot of snowfall (in the end, estimates ended up being a bit high, with John F. Kennedy International Airport reporting just over 4 inches of accumulation). Classes would be held virtually, they were told — even though there was a network outage that prevented smooth proceedings. There was plenty of pushback, even including some reports of teachers telling parents to ignore the edict from Mayor Eric Adams.

But the point remained: Access to virtual learning was robbing kids of one of the premier highlights of youth (at least in those geographical sweet spots like New Jersey, where snow falls sometimes in the winter).

Adams’s comments that New York City had to “minimize how many days our children are just sitting at home making snowmen” completely disregarded the social needs of a generation of overworked and overstressed children.

Because there’s nothing wrong with a day or two spent sitting at home, making snowmen. At least not according to Melanie Killen, a professor of human development and quantitative methodology at the University of Maryland.

“Snow days need to be sledding days,” she said. Snow days offer “a different kind of learning ... an important kind of learning.” [...]

“I wouldn’t necessarily call it a ‘brain break,’” Killen said. “Kids are out there using their brains in different ways on snow days. It’s a break from the traditional teacher-children dissemination, which kids need.”

Killen likened the typical snow day of the past to something like an extended recess, highlighting how during that less structured playtime, kids continue to learn. She added that almost everything about playing in the snow offers some sort of quantifiable lesson about the world.

YES! There's more at the link.

The use of ChatGPT in data analysis [crushing things]

I've been following the Hydraulic Press Channel for the last several years, I suppose, to satisfy my inner 8-year-old's interest in crushing things, any thing, until they break. The channel just purchased a new and bigger press to plus (to borrow a term from Walt Disney) the level of concentrated nonsense they can put on display.

Between 0:23 and 0:43 we learn how the new equipment allows them to gather data on how the machine is performing. From 0:48 to 1:07 we see them using ChatGPT to graph the data. Mirabile dictu!

From the YouTube page:

In this video we will test our 300 ton hydraulic press with its data logging system and sensors! The press can calculate the force being generated through pressure sensors, so there's no need for a force sensor / load cell! It can also measure the position of the hydraulic cylinder piston with a position sensor. From this data we can also calculate the speed and power of the press. To test out the system we crush paper (playing cards and Post-it notes), bone, and a log.

We also give an update about the bunker building project. We have ordered AR500 steel parts, including the roof and a bulletproof door. We are now waiting for the bulletproof windows and a couple of other parts to finish up the project.

John McLaughlin plays the blues, "Straight No Chaser"

Premiered May 7, 2021; 258,440 views.
Guitar icon releases exclusive video to raise awareness and funds for COVID relief in India, asking all to donate to Project Hope today.

Musicians: John McLaughlin (guitar); Roger Rossignol (piano); Jean-Michel ‘Kiki’ Aublette (bass); Nicolas Viccaro (drums)

John McLaughlin has made no secret of the musical and spiritual debt that he owes to the people and nation of India. Whether explicit (the unprecedented east/west hybrid of his beloved Shakti ensemble) or implicit (the daring rhythmic structures he transmitted via the Mahavishnu Orchestra), India has exerted a gravitational pull on McLaughlin’s life and music, and his gratitude is boundless.

Now, with India suffocated by a rampant outbreak of the deadly COVID-19 virus, McLaughlin is asking all of us to give just a little in support of relief efforts there. To encourage donations to the global health and humanitarian organization Project Hope -- who are providing essential supplies and personnel to the Indian people -- he has gathered a group of musician friends in Monaco to film an exclusive version of Thelonious Monk’s immortal blues “Straight No Chaser.” This fiery performance is his humble offering, a beacon beaming out to his many admirers with the hope of helping to offset the terrible blight now ravaging India.

Captured live, the performance features McLaughlin alongside pianist Roger Rossignol, bassist JM Kiki Aublette, and drummer Nicolas Viccaro, investing Monk’s timeless, knotty blues theme with an innovative spark and infectious abandon. Through his improvisation, alternately soaring and keening, McLaughlin delights in paying tribute to Monk’s deft rhythmic constructions and cunning use of dissonance. His compatriots are with him at every step, resulting in a conversational, refreshingly human treatment of this timeless standard.

“The world is in dire straits,” McLaughlin implores, “but India is just catastrophic: People dying outside hospitals because there are no beds, there is no oxygen. We recorded this video thinking of them. It is our gift to you in the hope that you will make a small gift.”

Inside the community room at 1301

Friday, February 23, 2024

The great Marvin "Hannibal" Peterson

See these posts: Tell me about the blues: Trane,Ornette, Hannibal, Hannibal Lokumbe: "The most sacred place I've ever heard music performed in is a cotton field in Elgin, Texas," and this one over at 3QD: Tell Me About The Blues. From that last one:

I do believe that the worthies of the Pulitzer Prize committee should award some kind of prize to Hannibal. In my opinion he’s as interesting a composer as that other trumpet player. I mean no disrespect. Wynton Marsalis is a fine player, composer, and band leader. But he hasn’t led no army across no Alps on no elephants! I’m just sayin’. Yes, I understand you prized Ornette Coleman, Henry Threadgill, and Anthony Davis and you gave posthumous awards to Scott Joplin, Duke Ellington, Thelonious Monk, and John Coltrane. Thank you very much. But Hannibal ain’t dead, not yet. Don’t let his dreadlocks put you off. It’s only hair. Now’s the time. Get on it.

Keeping Score of my online action

It’s a lazy Friday afternoon, but then recently all Friday afternoons have seemed lazy. I’ve spent much of the day doing this and that and taking stock, getting ready for the weekend push on my next piece for 3 Quarks Daily. It’s currently titled “Why We Need Philosophy, Now, More Than Ever [AI].” Perhaps it will still have that title by the time I publish it. We’ll see.

Anyhow, I thought I’d report my latest stats on Academia.edu and here at New Savanna.


Academia.edu

I’ve got 90,841 total views, which puts me in the top 0.5%. I’ve been there for months. At one point in the last year, Mar. 22, I was in the top 0.1%. That’s pretty good. I need to be in the top 0.001%.

The chart below shows my paper action over the last 60 days. The upper green line is paper views and the lower black line is downloads. Notice that all the data points except the rightmost one indicate totals for the day. I took the screenshot at 4 PM; I won’t hazard a guess where things will be at the end of the day.

Here are the most popular papers over the last 60 days (click on chart to embiggen):

Notice that the top paper, on the tech Singularity, is way behind the second paper, on Kawajiri’s Ninja Scroll, in total views (to the right), but ahead of it in downloads. The singularity paper is from 2014; the Ninja Scroll paper is from 2015. The corresponding Ninja Scroll post at New Savanna is the most popular post there. I’m particularly pleased with the performance of the Heart of Darkness paper, which is hard-core literary criticism, with quite a bit of close-reading and “distant” reading as well; it’s from 2019. The Visual Thinking paper is an encyclopedia article I published in 1990.

New Savanna

Here are the views on New Savanna over the past three months. Very spiky.

Here’s the last year; notice the overall spikiness and the heavy action in August and September.

This is how things look over the whole life of the blog:

I have no idea what caused all that action in 2017, nor what’s the source of the recent, but highly variable, increase since August, 2023.

Here are the most popular posts for the last three months (click on chart to embiggen):

Note particularly the most popular one: GOAT Literary Critics: Part 1.1, What do you mean, literary critic? That’s from December 4, 2023, and is the first post in my ongoing series on literary criticism. That series is a response to Tyler Cowen’s recently published book, GOAT: Who is the Greatest Economist of all Time and Why Does it Matter? Cowen published a link to my post at his blog, Marginal Revolution, on December 6 and that drove a lot of traffic to New Savanna that day.

The second post on the list is my post on Ninja Scroll; as I indicated above, it’s the most popular one at New Savanna and dates from September, 2010, the first year of the blog. In third place we have one of the posts in my series about Miyazaki’s The Wind Rises. That is now the third most popular post on the blog and dates from late 2015.

Who’s the GOAT of Economics? Tyler Cowen on His New AI Book & More! [Bonus: from double-entry bookkeeping to supply and demand] – that’s in fourth place and has some comments on a podcast in which Cowen discussed his GOAT economists book. During the conversation he mentioned that he couldn’t understand why 17th century thinkers had such trouble conceptualizing the phenomenon of supply and demand, which he could easily explain to a reasonably intelligent teenager. I did some thinking and made a variety of comments, giving special attention to a speculation that the conceptualization probably piggybacked on the practice of double-entry bookkeeping. Cowen apparently liked that and gave me another link.

More later.