Sunday, April 30, 2023

Yellow tulip

Thoughts are an emergent property of brain activity. [working memory]

From the YouTube page:

Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition. Recently on BI we've discussed oscillations quite a bit. In episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In episode 160, Ole Jensen discussed his work in humans showing that low frequency oscillations exert a top-down control on incoming sensory stimuli, and this is directly in agreement with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, which is an account of how brain oscillations can dictate where in various brain areas neural activity is on or off, and hence contributes or not to ongoing mental function. We also discuss working memory in particular, and a host of related topics.
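To make the spatial computing idea a bit more concrete, here is a toy illustration in Python (my own minimal sketch, not the Miller Lab's model; the frequencies and parameters are invented): a slow oscillation acts as a gate that determines when faster activity in a patch of cortex is expressed.

    import numpy as np

    # Toy illustration of oscillatory gating: a slow "control" rhythm determines
    # when fast "content" activity is allowed through. All values are arbitrary,
    # chosen only for illustration.
    fs = 1000                           # sampling rate, Hz
    t = np.arange(0, 2, 1 / fs)         # two seconds of simulated time

    slow = np.sin(2 * np.pi * 6 * t)    # 6 Hz low-frequency oscillation
    fast = np.sin(2 * np.pi * 60 * t)   # 60 Hz fast activity (the "content")

    gate = slow > 0                     # activity is "on" only during one half of the slow cycle
    gated = fast * gate

    print(f"fraction of time the fast activity is expressed: {gate.mean():.2f}")

The point of the sketch is only that the slow rhythm's phase, not the fast activity itself, decides when and where the fast activity contributes.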

0:00 - Intro
6:22 - Evolution of Earl's thinking
14:58 - Role of the prefrontal cortex
25:21 - Spatial computing
32:51 - Homunculus problem
35:34 - Self
37:40 - Dimensionality and thought
46:13 - Reductionism
47:38 - Working memory and capacity
1:01:45 - Capacity as a principle
1:05:44 - Silent synapses
1:10:16 - Subspaces in dynamics

I was a bit surprised to hear so much discussion of a shift in emphasis from single-neuron recording to recording activity in many neurons. Back in 1969 Karl Pribram published an article in Scientific American arguing that neural representation was based on holographic principles, which necessarily draws our attention to the activity of populations of neurons. Yes, I read about the infamous "grandmother" cell, and I read lots of work reporting the results of single neuron recordings, but I hadn't realized how long that persisted as the dominant paradigm.

Saturday, April 29, 2023

Cherry blossoms

Teaching a humanoid robot to move around in the world is difficult

From the YouTube page:

Robert Playter is CEO of Boston Dynamics, a legendary robotics company that over 30 years has created some of the most elegant, dextrous, and simply amazing robots ever built, including the humanoid robot Atlas and the robot dog Spot.

This is a completely different world from large language models. It took 15 years for Boston Dynamics to get its Atlas robot to produce a natural-looking walk. This discussion is worth viewing and thinking about. Figuring out how to get a robot to move is at least as intellectually challenging as getting an LLM to produce coherent and sensible prose. One might even argue that it is more challenging. At this point getting LLMs to produce coherent prose is not difficult. Multiple-column multiplication is difficult; eliminating confabulation is difficult; but mere prose production is not. But for some reason we don't know how to calibrate the difficulty of that behavior and so are prone to overvalue the significance of what the LLM is doing. But we are unlikely to view the movements of a humanoid robot and conclude that it's only a hop-skip-and-jump from playing a competent game of basketball.

On predictive control (c. 24:38):

Robert Playter: Yeah, those things have to run pretty quickly.

Lex Fridman: What's the challenge of running things pretty quickly, a thousand hertz of acting and sensing quickly?

RP: There are a few different layers to that. At the lowest level you typically like to run things at around a thousand hertz, which means that at each joint of the robot you're measuring position or force and then trying to control your actuator, whether it's a hydraulic or electric motor, trying to control the force coming out of that actuator. And you want to do that really fast, something like a thousand hertz, and that means you can't have too much calculation going on at that joint. But that's pretty manageable these days and it's fairly common.

And then there's another layer that you're probably calculating maybe at 100 hertz, maybe 10 times slower, which is now starting to look at the overall body motion and thinking about the larger physics of the robot.

And then there's yet another loop that's probably happening a little bit slower, which is where you start to bring in your perception and your vision and things like that. So you need to run all of these loops sort of simultaneously, and you do have to manage your computer time so that you can squeeze in all the calculations you need in real time in a very consistent way.
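Playter is describing a standard multi-rate control architecture. Here is a minimal sketch of that pattern in Python (the loop rates follow the ones he mentions; the controller functions are placeholders of my own, not Boston Dynamics code):

    import time

    # Multi-rate control skeleton: a fast joint-level loop (~1000 Hz), a slower
    # whole-body loop (~100 Hz), and a still slower perception loop (~10 Hz).

    def joint_control():      pass  # measure joint position/force, command the actuator
    def whole_body_control(): pass  # compute overall body motion, the larger physics
    def perception_update():  pass  # fold in vision and other slow sensing

    JOINT_DT = 1 / 1000     # 1000 Hz tick
    BODY_EVERY = 10         # every 10th tick  -> ~100 Hz
    PERCEPTION_EVERY = 100  # every 100th tick -> ~10 Hz

    def run(ticks=1000):
        for i in range(ticks):
            start = time.perf_counter()
            joint_control()
            if i % BODY_EVERY == 0:
                whole_body_control()
            if i % PERCEPTION_EVERY == 0:
                perception_update()
            # Sleep off whatever is left of the tick so the loop stays on schedule.
            remaining = JOINT_DT - (time.perf_counter() - start)
            if remaining > 0:
                time.sleep(remaining)

    run()

The structural point is the one Playter makes: the innermost loop has to finish within its millisecond budget, so heavier computation gets pushed out to the slower loops.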

Tuesday, April 25, 2023

Ellie Pavlick: The Mind of a Language Model {good stuff!}

From the YouTube page:

Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are the deep learning neural network type, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.

0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?

Addendum, 4.26.23: Here's another video of Pavlick's work:

Addendum, 5.15.23: Here's yet another video of Pavlick's work:

Red tulips askew

Metaphysics and Large Language Models @ 3 Quarks Daily

I’ve got another article at 3 Quarks Daily:

World, Mind, Learnability, Large Language Models, and the Metaphysical Structure of the Cosmos

I take another crack at something I’ve looked at before, the idea that LLMs embody a metaphysics of the cosmos, whatever those are. We’re not dealing with the physical structure of the cosmos, we’re not dealing with the totality of things-as-such, but rather with the (conceptual) structures through which we apprehend them. Given the wide range of texts that have been digested and distilled into these models, their internal structure somehow is responsive to, reflects and refracts, those structures. 

Let me emphasize: Foundational LLMs implicate the metaphysical structure of the cosmos.

I set up the argument by starting with the 1977 short film, Powers of Ten, which traverses the cosmos from very large to very small in the compass of nine minutes. Then I ride one of my favorite hobby horses, Disney’s Fantasia, which also encompasses the cosmos. After that I ask: What must the world be like so that we can learn its structure?

And so forth and so on.

* * * * *

In the course of working on this piece I decided to ask GPT-4 about the film, Powers of Ten. Of course it knew about the film. I then asked it to summarize what that film shows. It did so and with great confidence. And it got it wrong. First, I present the dialog I had with GPT-4, and then I present a summary of the film from Wikipedia. That summary, cursory though it is, makes it clear that GPT-4’s summary is incorrect. The Chatster was up to its old tricks, just making stuff up. I didn’t make any effort to nail down all the errors in GPT-4’s summary.

* * * * *

[April 22, 2023 – 3:15PM – Mar 23 version]

Do you know about the short 1977 film “Powers of Ten”?

Yes, I am familiar with the short film "Powers of Ten." It was directed by Ray and Charles Eames and was released in 1977. The film explores the relative sizes of things in the universe, from the scale of the observable universe all the way down to the subatomic level.

The film starts with an aerial view of a man and woman having a picnic in a park in Chicago, and then zooms out to show the Earth from a distance of ten meters, then ten times farther, and so on, until it reaches the limit of the observable universe. It then zooms back in, but this time at a smaller and smaller scale, revealing the structure of atoms and subatomic particles.

The film is a classic of science education and has been widely praised for its innovative use of scale and its ability to convey complex ideas in a clear and accessible way. It has been used in classrooms and science museums around the world, and is often cited as an example of effective science communication.

Could you talk your way through the first part of the film, starting with the couple and zooming out? What do we see at each higher power of 10?

Sure, I'd be happy to describe the first part of the film "Powers of Ten" to you.

The film begins with a shot of a couple, a man and a woman, having a picnic on a sunny day in a park in Chicago. The camera then begins to zoom out from the couple, at a rate of 10 times the distance every 10 seconds. As the camera zooms out, we see the park getting smaller and smaller, until it disappears from view.

At the first power of ten (10^1), we see the view of the park from an altitude of 10 meters above the couple. We can see the couple clearly, as well as the surrounding park and nearby buildings.

At the second power of ten (10^2), the camera has zoomed out to an altitude of 100 meters. Now we can see not just the park, but also the surrounding neighborhoods and the city of Chicago in the distance.

At the third power of ten (10^3), the camera has zoomed out to an altitude of 1,000 meters, or one kilometer. Now we can see not just the city of Chicago, but also the entire Lake Michigan shoreline and beyond.

At the fourth power of ten (10^4), the camera has zoomed out to an altitude of 10,000 meters, or 10 kilometers. Now we can see not just the city of Chicago and Lake Michigan, but also the entire state of Illinois and neighboring states.

At the fifth power of ten (10^5), the camera has zoomed out to an altitude of 100,000 meters, or 100 kilometers. Now we can see not just the Midwest region of the United States, but also the entire continent of North America.

At the sixth power of ten (10^6), the camera has zoomed out to an altitude of 1,000,000 meters, or 1,000 kilometers. Now we can see not just North America, but also the entire planet Earth.

At the seventh power of ten (10^7), the camera has zoomed out to an altitude of 10,000,000 meters, or 10,000 kilometers. Now we can see not just the Earth, but also the entire solar system, with the planets orbiting the sun.

At the eighth power of ten (10^8), the camera has zoomed out to an altitude of 100,000,000 meters, or 100,000 kilometers. Now we can see not just the solar system, but also the entire Milky Way galaxy, with its hundreds of billions of stars and countless other celestial objects.

At the ninth power of ten (10^9), the camera has zoomed out to an altitude of 1,000,000,000 meters, or 1 billion kilometers. Now we can see not just the Milky Way galaxy, but also the entire Local Group of galaxies, a cluster of more than 50 galaxies that includes the Milky Way.

At the tenth power of ten (10^10), the camera has zoomed out to an altitude of 10,000,000,000 meters, or 10 billion kilometers. Now we can see not just the Local Group of galaxies, but also the entire observable universe, with its hundreds of billions of galaxies and vast expanses of empty space.

That's a brief overview of the first part of "Powers of Ten." The film is an excellent way to explore the vastness of the universe and the different scales at which we can observe it.
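As a quick sanity check on that summary, compare the altitudes GPT-4 cites with the rounded sizes of the objects it claims are visible at them (the reference values below are approximate and the arithmetic is mine, not the film's):

    # Rounded reference sizes, in meters.
    EARTH_DIAMETER  = 1.3e7    # ~12,700 km
    EARTH_MOON_DIST = 3.8e8    # ~384,000 km
    SOLAR_SYSTEM    = 9e12     # ~diameter of Neptune's orbit
    MILKY_WAY       = 9.5e20   # ~100,000 light-years

    # GPT-4 claims the entire solar system is visible at 10^7 m
    # and the entire Milky Way at 10^8 m.
    print(f"10^7 m vs Earth's diameter:    {1e7 / EARTH_DIAMETER:.2f}")   # < 1: the frame can't even hold the Earth
    print(f"10^8 m vs Earth-Moon distance: {1e8 / EARTH_MOON_DIST:.2f}")  # < 1: still inside the Moon's orbit
    print(f"solar system / 10^7 m:         {SOLAR_SYSTEM / 1e7:.0e}")     # off by roughly six orders of magnitude
    print(f"Milky Way / 10^8 m:            {MILKY_WAY / 1e8:.0e}")        # off by roughly thirteen orders of magnitude

At 10^8 meters the field of view has barely cleared the Earth-Moon system; the Milky Way, Local Group, and observable universe that GPT-4 places at 10^8 through 10^10 meters lie many orders of magnitude farther out.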

From the Poetics: Beginning, middle, end

Monday, April 24, 2023

How Not To Destroy the World With AI - Stuart Russell

From the YouTube page:

About Talk:

It is reasonable to expect that artificial intelligence (AI) capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Alan Turing and others have suggested? Will we lose control over our future? Or will AI complement and augment human intelligence in beneficial ways? It turns out that both views are correct, but they are talking about completely different forms of AI. To achieve the positive outcome, a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. Russell will argue that this is possible as well as necessary. The new approach to AI opens up many avenues for research and brings into sharp focus several questions at the foundations of moral philosophy.

About Speaker:

Stuart Russell, OBE, is a professor of computer science at the University of California, Berkeley, and an honorary fellow of Wadham College at the University of Oxford. He is a leading researcher in artificial intelligence and the author, with Peter Norvig, of “Artificial Intelligence: A Modern Approach,” the standard text in the field. He has been active in arms control for nuclear and autonomous weapons. His latest book, “Human Compatible,” addresses the long-term impact of AI on humanity.

How do we get the machine to assist humans? (c. 36:26):

So we need to actually get rid of the standard model. We need a different model, right? This is the standard model: machines are intelligent to the extent their actions can be expected to achieve their objectives.

Instead, we need the machines to be beneficial to us, right? We don't want this sort of pure intelligence that once it has the objective is off doing its thing, right? We want the systems to be beneficial, meaning that their actions can be expected to achieve our objectives.

And how do we do that? [...] That you do not build in a fixed known objective upfront. Instead, the machine knows that it doesn't know what the objective is, but it still needs a way of grounding its choices over the long run.

And the evidence about human preferences will, let's say, flow from human behavior. [...] So we call this an assistance game. It involves at least one person, at least one machine, and the machine is designed to be of assistance to the human. [...] The key point is there's a priori uncertainty about what those utility functions are. So it's got to optimize something, but it doesn't know what it is.

And if you solve the game, in principle you can just solve these games offline and then look at the solution and how it behaves. And as the solution unfolds, effectively, information about the human utilities is flowing at runtime based on the human actions. And the humans can do deliberate actions to try to convey information, and that's part of the solution of the game. They can give commands, they can prohibit you from doing things, they can reward you for doing the right thing. [...]

So in some sense, the entire record, the written record of humanity, is a record of humans doing things and other people being upset about it, right? All of that information is useful for understanding what human preference structures really are algorithmically.

Yeah, we can solve these, and in fact the one-machine, one-human game can be reduced to a partially observable MDP.

And for small versions of that we can solve it exactly, and actually look at the equilibrium of the game and how the agents behave. But an important point here: the word alignment is often used in discussing these kinds of things.

And as Ken mentioned, it's related to inverse reinforcement learning, the learning of human preference structures by observing behavior. But alignment gives you this idea that we're gonna align the machine and the human and then off they go, right? That's never going to happen in practice.

The machines are always going to have a considerable uncertainty about human preference structures, right? Partly because there are just whole areas of the universe where there's no experience and no evidence from human behavior about how we would behave or how we would choose in those circumstances. And of course, you know, we don't know our own preferences in those areas. [...]

So when you look at these solutions, how does the robot behave? If it's playing this game, it actually defers to human requests and commands. It behaves cautiously because it doesn't wanna mess with parts of the world where it's not sure about your preferences. In the extreme case, it's willing to be switched off.

In the interest of time, I'm going to have to skip over the proof of that, which is proved with a little game. But basically we can show very straightforwardly that as long as the robot is uncertain about how the human is going to choose, then it has a positive incentive to allow itself to be switched off, right? It gains information by leaving that choice available for the human. And it only closes off that choice when it has, or at least when it believes it has, perfect knowledge of human preferences.
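That switch-off incentive can be checked with a few lines of arithmetic. Below is a toy version of the game Russell is summarizing (the belief distribution is made up, and the human is assumed to choose rationally): the robot is uncertain about the utility U its proposed action has for the human; deferring means the human permits the action when U > 0 and switches the robot off (payoff 0) otherwise.

    import numpy as np

    rng = np.random.default_rng(0)

    # The robot's belief over U, the utility (to the human) of its proposed action.
    # The distribution is invented purely for illustration.
    U = rng.normal(loc=0.2, scale=1.0, size=100_000)

    act_now = U.mean()                 # act without asking: expected utility E[U]
    defer   = np.maximum(U, 0).mean()  # defer: the action happens only when the human
                                       # wants it (U > 0); otherwise the robot is
                                       # switched off and the payoff is 0

    print(f"E[U]        (act now) = {act_now:.3f}")
    print(f"E[max(U,0)] (defer)   = {defer:.3f}")

Since max(U, 0) is never less than U, deferring is never worse in expectation, and it is strictly better whenever the robot's belief puts probability on both signs of U. The advantage disappears only when the robot is certain how the human would choose, which is Russell's point about perfect knowledge being the condition for closing off that option.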

Indexical goals (51:33):

One might initially think, well, you know what they're doing. If they're learning to imitate humans, then maybe, almost coincidentally, that will end up with them being aligned with what humans want. So perhaps we are accidentally solving the alignment problem here, by the way we're training these systems. And the answer to that is: it depends. It depends on the type of goal that gets learned.

And I'll distinguish two types of goals. There are what we call common goals, things like painting the wall or mitigating climate change, where if you do it, I'm happy; if I do it, you are happy; we're all happy, right? These are goals where any agent doing these things would make all the agents happy.

Then there are indexical goals, meaning goals that are indexical to the individual who has the goal. So drinking coffee, right? I'm not happy if the robot drinks the coffee, right? What I want to have happen is: if I'm drinking coffee and the robot does some inverse reinforcement learning, hey, Stuart likes coffee, I'll make Stuart a cup of coffee in the morning. The robot drinking a coffee is not the same, right?

So this is what we mean by an indexical goal. And becoming ruler of the universe, right, is not the same if it's me versus the robot. Okay? And obviously if systems are learning indexical goals, that's arbitrarily bad as they get more and more capable. And unfortunately, humans have a lot of indexical goals. We do not want AI systems to learn from humans in this way.
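A minimal sketch makes the failure mode of naive imitation explicit (my own illustration with an invented state encoding, not Russell's formalism): a common goal depends only on the world state, while an indexical goal also depends on which agent the benefit accrues to.

    # Common vs. indexical goals; the state dictionary and payoffs are invented.

    def common_goal(state):
        # "climate change is mitigated": any agent achieving it satisfies everyone.
        return 1.0 if state["climate_mitigated"] else 0.0

    def indexical_goal(state, agent):
        # "drink the coffee": valuable only to the agent who does the drinking.
        return 1.0 if state["coffee_drunk_by"] == agent else 0.0

    state = {"climate_mitigated": True, "coffee_drunk_by": "robot"}

    # A system that copies the human's goal verbatim, substituting itself as the agent:
    print(common_goal(state))               # 1.0 -- fine, the goal is agent-independent
    print(indexical_goal(state, "robot"))   # 1.0 for the robot...
    print(indexical_goal(state, "Stuart"))  # ...but 0.0 for Stuart, who wanted the coffee

What should be learned is not "coffee gets drunk" but "Stuart's preference for coffee gets served," which is why copying indexical goals becomes arbitrarily bad as capability grows.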

Imitation learning is not alignment.

Pink tulip

Michael Jordan: How AI Fails Us, and How Economics Can Help

Jordan argues that AI, which is mostly machine learning these days, remains dominated by the Dr. Frankenstein notion of creating an artificial human. He regards that as a mistake, and argues for a more collective approach. (Cf. this post from two years ago, Beyond "AI" – toward a new engineering discipline.)

From the YouTube page:

Artificial intelligence (AI) has focused on a paradigm in which intelligence inheres in a single agent, and in which agents should be autonomous so they can exhibit intelligence independent of human intelligence. Thus, when AI systems are deployed in social contexts, the overall design is often naive. Such a paradigm need not be dominant. In a broader framing, agents are active and cooperative, and they wish to obtain value from participation in learning-based systems. Agents may supply data and resources to the system, only if it is in their interest. Critically, intelligence inheres as much in the system as it does in individual agents. This perspective is familiar to economics researchers, and a first goal in this work is to bring economics into contact with computer science and statistics. The long-term goal is to provide a broader conceptual foundation for emerging real-world AI systems, and to upend received wisdom in the computational, economic and inferential disciplines.

Michael I. Jordan is the Pehong Chen Distinguished Professor in the departments of electrical engineering and computer science and of statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive, biological and social sciences. Jordan is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences, and a foreign member of the Royal Society. He was a plenary lecturer at the International Congress of Mathematicians in 2018. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize from the Cognitive Science Society in 2015 and the ACM/AAAI Allen Newell Award in 2009.

Two slides from the presentation:

Here’s a paper where Jordan is one of the authors:

By Divya Siddarth, Daron Acemoglu, Danielle Allen, Kate Crawford, James Evans, Michael Jordan, E. Glen Weyl, How AI Fails Us, Edmond J. Safra Center for Ethics and Carr Center for Human Rights Policy, Harvard University (December 1, 2021).

Abstract:

The dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This “actually existing AI” vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous, optimizing for artificial metrics of human replication rather than for systemic augmentation, and tending to concentrate power, resources, and decision-making in an engineering elite. Alternative visions based on participating in and augmenting human creativity and cooperation have a long history and underlie many celebrated digital technologies such as personal computers and the internet. Researchers and funders should redirect focus from centralized autonomous general intelligence to a plurality of established and emerging approaches that extend cooperative and augmentative traditions, ranging from successes such as Taiwan’s digital democracy project to collective intelligence platforms like Wikipedia. We conclude with a concrete set of recommendations and a survey of alternative traditions.

Sunday, April 23, 2023

Sign, white on black

Another crazy/interesting video about AGI and the future [Goertzel]

I've been aware of Ben Goertzel since, I believe, the 1980s or 1990s, when I read an article he published in The Journal of Social and Evolutionary Systems, a journal where David Hays and I published regularly. He's a researcher in artificial intelligence who thinks that his team may well produce AGI and that AGI isn't all that far in the future.

He's working entirely outside the world of Big AI in the USofA, Google, Microsoft, OpenAI, Facebook, etc. He's skeptical about the larger claims being made about GPT systems, a skepticism I feel is warranted, but sees them as nonetheless interesting and useful. I agree with that as well.

It's interesting comparing his remarks with those of John Vervaeke (immediately below). At one point while listening to him I brought myself to the edge of thinking about actually creating or encountering an artificial mind that warrants moral standing and rights. That's an uncanny feeling. We're getting closer to that. How much closer.... Who knows.

From the YouTube page:

Today we’re joined by Ben Goertzel, CEO of SingularityNET. In our conversation with Ben, we explore all things AGI, including the potential scenarios that could arise with the advent of AGI and his preference for a decentralized rollout comparable to the internet or Linux. Ben shares his research in bridging neural nets, symbolic logic engines, and evolutionary programming engines to develop a common mathematical framework for AI paradigms. We also discuss the limitations of Large Language Models (LLMs) and the potential of hybridizing LLMs with other AGI approaches. Additionally, we chat about their work using LLMs for music generation and the limitations of formalizing creativity. Finally, Ben discusses his team's work with the OpenCog Hyperon framework and Simuli to achieve AGI, and the potential implications of their research in the future.

Chapters:

05:16 - AGI and Sentience
08:52 - Current Large Language Models and the Path to AGI
17:33 - Large Language Models Limited in Theory of Mind
22:07 - Exploring the Potential of Music LM Models
27:06 - AGI potential paths: Neuroscience vs. Mathematical Fusion
31:29 - OpenCog Hyperon: Rebuilding AI Infrastructure at Scale
35:44 - Advancing Towards Truth GPT and AGI Breakthrough
42:33 - The Complexities Behind Sophia's Dialogue Responses
53:08 - General Intelligence and Agency: Tightly Linked?
57:58 - The Implications of AGI Breakthrough: Decentralization Needed

An interesting video about the implications of current AI (GPT) [Vervaeke]

I'm only thirty minutes into this, but I'm posting it, though I've got doubts. The doubts? The language and the framing, including the fact that Vervaeke declares himself to be an "Internationally Acclaimed Cognitive Scientist." Maybe he is, maybe he isn't, but a quick web search leaves me skeptical. In any event, that's the kind of appellation best left in the hands of third parties. To use it yourself is, at best, tacky.

And then we have those moments when Vervaeke lapses into the third person when referring to himself. What?

Still, Vervaeke does recognize the power and potential of the technology while at the same time seeing that it presents a deep challenge to our sense of who and what we are and that that challenge itself is a source of confusion and danger leading to a mis-evaluation and misuse of the technology. That's worth listening to.

Quick notes 

1) Too much talk of autopoiesis and emergence for my taste. I tend to think of those as stand-ins for things we don't understand.

2) Makes an interesting distinction between intelligence and rationality. Intelligence seems to be a capacity while rationality is learned and can be developed. 

3) Vervaeke and his cohorts are worried that the pornography industry and the military will be the primary drivers of embodied AI. 

4) In a complex world, trade-offs are inevitable. AIs cannot avoid them. Moreover, internal coherence will be an issue.

5) "Reason is about how we bind ourselves to ourselves and to each other so we can be bound to the world."

6) "Don't try and code into them rules and values. We need to be able at some point to answer this question in deep humility and deep truth: What would it be for these machines to flourish for themselves?"

7) "I think the theological response is ultimately what is needed here." [What do I think about this? Hmmmm.]

8) Compare Vervaeke's remarks with those of Ben Goertzel.

* * * * *

All the text below is taken from the YouTube page:

AI: The Coming Thresholds and The Path We Must Take | Internationally Acclaimed Cognitive Scientist

Dr. John Vervaeke lays out a multifaceted argument discussing the potential uses, thresholds, and calamities that may occur due to the increase in artificial intelligence systems. While there is a lot of mention of GPT and other chatbots, this argument is meant to be seen as confronting the principles of AI, AGI, and any other forms of Artificial Intelligence.

First, Dr. Vervaeke lays out an overview of his argument while also contextualizing the conversation. Dr. Vervaeke then explores the scientific ramifications and potentialities. Lastly, Dr. Vervaeke concludes in the philosophical realm and ends the argument with a strong and stern message that we face a kairos, potentially the greatest that the world has ever seen.

Dr. Vervaeke is also joined in this video essay by Ryan Barton, the Executive Director of the Vervaeke Foundation, as well as Eric Foster, the Media Director at the Vervaeke Foundation. 

* * * * *

Addendum: 5.25.23: Further thoughts about the moral challenges posed by AI.

Orthogonal views in Hoboken

Are we poisoning ourselves with microplastics?

That's what Mark O'Connel argues in a recent op-ed in the NYTimes, Our Way of Life Is Poisoning Us (April 20, 2023). The article opens:

There is plastic in our bodies; it’s in our lungs and in our bowels and in the blood that pulses through us. We can’t see it, and we can’t feel it, but it is there. It is there in the water we drink and the food we eat, and even in the air that we breathe. We don’t know, yet, what it’s doing to us, because we have only quite recently become aware of its presence; but since we have learned of it, it has become a source of profound and multifarious cultural anxiety.

Maybe it’s nothing; maybe it’s fine. Maybe this jumble of fragments — bits of water bottles, tires, polystyrene packaging, microbeads from cosmetics — is washing through us and causing no particular harm. But even if that was true, there would still remain the psychological impact of the knowledge that there is plastic in our flesh. This knowledge registers, in some vague way, as apocalyptic; it has the feel of a backhanded divine vengeance, sly and poetically appropriate. Maybe this has been our fate all along, to achieve final communion with our own garbage.

The word we use, when we speak about this unsettling presence within us, is “microplastics.”

Microplastics are causing problems in fish and among seabirds; they have been found on Mt. Everest and in the Mariana Trench, and in the breast milk of new mothers in Italy:

To consider this reality is to glimpse a broader truth that our civilization, our way of life, is poisoning us. There is a strange psychic logic at work here; in filling the oceans with the plastic detritus of our purchases, in carelessly disposing of the evidence of our own inexhaustible consumer desires, we have been engaging in something like a process of repression. And, as Freud insisted, the elements of experience that we repress — memories, impressions, fantasies — remain “virtually immortal; after the passage of decades they behave as though they had just occurred.” This psychic material, “unalterable by time,” was fated to return, and to work its poison on our lives.

Is this not what is going on with microplastics? The whole point of plastic, after all, is that it’s virtually immortal. From the moment it became a feature of mass-produced consumer products, between the First and Second World Wars, its success as a material has always been inextricable from the ease with which it can be created, and from its extreme durability. What’s most useful about it is precisely what makes it such a problem. And we keep making more of the stuff, year after year, decade after decade. Consider this fact: Of all the plastic created, since mass production began, more than half of it has been produced since 2000. We can throw it away, we can fool ourselves into thinking we’re “recycling” it, but it will not absent itself. It will show up again, in the food we eat and the water we drink. It will haunt the milk that infants suckle from their mothers’ breasts. Like a repressed memory, it remains, unalterable by time.

Writing in the 1950s, as mass-produced plastic was coming to define material culture in the West, the French philosopher Roland Barthes saw the advent of this “magical” stuff effecting a shift in our relationship to nature. “The hierarchy of substances,” he wrote, “is abolished: a single one replaces them all: the whole world can be plasticized, and even life itself since, we are told, they are beginning to make plastic aortas.”

To pay attention to our surroundings is to become aware of how right Barthes was. As I type these words, my fingertips are pressing down on the plastic keys of my laptop; the seat I’m sitting on is cushioned with some kind of faux-leather-effect polymer; even the gentle ambient music I’m listening to as I write is being pumped directly to my cochleas by way of plastic Bluetooth earphones. These things may not be a particularly serious immediate source of microplastics. But some time after they reach the end of their usefulness, you and I may wind up consuming them as tiny fragments in the water supply. In the ocean, polymers contained in paint are the largest source of these particles, while on land, dust from tires, and tiny plastic fibers from things like carpets and clothing, are among the main contributors.

And on and on, through an anecdote about Joe Rogan's worries, to pervasive uncertainty and anxiety:

And the aura of scientific indeterminacy that surrounds the subject — maybe this stuff is causing unimaginable damage to our bodies and minds; then again, maybe it’s fine — lends it a slightly hysterical cast. We don’t know what these plastics are doing to us, and so there is no end to the maladies we might plausibly ascribe to them. Maybe it’s microplastics that are making you depressed. Maybe it’s because of microplastics that you have had a head cold constantly since Christmas. Maybe it’s microplastics that are stopping you and your partner from conceiving, or making you lazy and lethargic, or forgetful beyond your years. Maybe it’s microplastics that caused the cancer in your stomach, or your brain.

There's more at the link.

Friday, April 21, 2023

Is AI Doom as an attempt to hijack reality?

For some reason I can’t stop thinking about the phenomenon of belief in AI Doom. It’s not that I fear it, not at all. While AI certainly presents challenges, even independently of its misuse by bad actors (individuals, corporations, governments, gangs), I see no reason to believe that Rogue AIs are going to paperclip us, disassemble us with nanobots, or, for that matter, use us as a source of energy (as in The Matrix). It is utterly implausible.

What stumps me is why so many otherwise intelligent people have bought into it.

At the moment I’m thinking of it as a desperate attempt to “freeze” reality. It gives you a closed world in which to think. This BIG BAD THING of the future is going to destroy us. So, it is our responsibility to plan against that thing. Nothing else matters. The world is closed. Robin Hanson has argued Most AI Fear Is Future Fear. Think of this as a variation on that argument.

Two further thoughts: 1) When I was an undergraduate I read a fair amount of primate ethology. One thing I read was that when an adult baboon wanted to keep a young one from wandering away from the troop, they’d go up to it and give it a sharp blow. The youngster would then move to and cling to them as a source of safety. Think of the assertion “WE’RE DOOMED” as the equivalent of that sharp blow. We recoil in fear and move toward the people who are saying that and so come under the spell of their worldview.

2) After Donald Trump was inaugurated he insisted the crowd was the largest ever, even in the face of photographic evidence to the contrary. He was, in effect, making the point that he gets to determine the nature of reality, not mere “facts.” And the willingness of his minions to promulgate this view was a test of their loyalty and so was a way to show their fealty to him. AI Doom strikes me as a similar attempt to hijack reality, though in this case the Doomers have had to hijack themselves before they can attempt to hijack the rest of us.

Thursday, April 20, 2023

Pink cherry blossoms

"The Fox settlement reeks of justice without accountability"

David French, Did Fox News Just Pay for the Privilege of Continued Corruption? NYTimes, April 20, 2023.

The legal system can achieve justice when an aggrieved party is made whole. And make no mistake, Dominion received justice. It was more than made whole for Fox’s lies, and its quest for even more justice continues. Its lawsuits against OAN, Newsmax, Sidney Powell, Rudy Giuliani and Mike Lindell are still pending.

But accountability is different. Accountability occurs when the people responsible for misconduct — and not merely their corporate bank accounts — experience proportionate consequences for their actions. One of the #MeToo movement’s greatest achievements was exposing to the world the degree to which corporations essentially paid for the privilege of continued corruption. They’d write checks to the survivors of abuse (granting them justice) without taking action against the abusers (enabling them to avoid accountability).

This is not a critique of the plaintiffs at all. They need justice, and they don’t have the power to impose accountability. They can’t mandate that corporations apologize or terminate employees without the agreement of the corporation. The system itself can generally only give them money. Do we want to ask people who’ve been harmed by misconduct to delay or risk their own quest for justice for the sake of using the settlement process to mandate apologies or terminations that the courts don’t have the power to compel?

the cost of doing business:

The end result is that Fox has paid an immense price for its lies, but it recognizes that its true vulnerability isn’t in its bank account but in its audience. It can absorb huge financial losses so long as those losses are fleeting. It cannot prosper if it loses its audience. Shielding its audience from the truth is easily worth almost $800 million to a company that made $1.2 billion in net income last year and is sitting on $4 billion in cash reserves.

In the meantime, many of the viewers who keep the company so very profitable won’t know anything meaningful about the Dominion settlement or Fox’s lies — because Fox won’t tell them. I can think of any number of friends, relatives and neighbors who regularly consume conservative media and know nothing about the case. They know nothing about Fox’s falsehoods. Their ignorance is of incalculable worth to Fox.

Wednesday, April 19, 2023

Bright red tulip

キスメット [kisumetto] – Some Japanese connections in the internet age

When I visited my sister over the most recent Christmas holiday, she gave me a book, Lost Japan: Last Glimpse of Beautiful Japan, by Alex Kerr. As the title indicates, the book is about traditional Japanese culture. Kerr devotes a chapter or two to restoring a traditional Japanese house that he'd bought in 1973 for $1800.

On April 17 The New York Times had a story about abandoned Japanese houses, many in rural areas, which are quite common in Japan for a variety of reasons. It starts with the story of Jaya Thursfield, who had moved to Japan with his Japanese wife, Chihiro (yes, same name as the girl in Spirited Away). They bought one of these abandoned houses (about 45 minutes from Tokyo) and restored it.

Then, wouldn't you know it, apparently one of YouTube's resident AIs knew I'd read that article and served up the first in a series of videos that Thursfield had made about restoring that house. Just now it presented me with one of Thursfield's videos entitled, "Last Glimpse of Japan’s Beautiful Old Houses? (ft. Alex Kerr)."

That's right, that Alex Kerr. Thursfield is interviewing Kerr in that house he bought a half century ago.

Bonus: For a while Kerr worked with Trammell Crow, the father of a Yale classmate of his, Trammell S. Crow. Harlan Crow, Justice Clarence Thomas's friend and benefactor, is a brother to Trammell S., and is chairman of the company created by the elder Trammell Crow (who died in 2009).

Tuesday, April 18, 2023

This is crazy talk about AI risk

Is AI an Existential Threat? A Debate. | Robert Wright, Roko Mijic, and Alexander Campbell

From the program notes:

0:00 How Roko and Alex wound up debating on the Nonzero Podcast
5:05 Could near-term AI disruption be worse than Covid?
13:18 Is AI an existential threat?
25:31 How the sci-fi apocalypse scenario might unfold
36:42 How is a renegade AI like a man wearing a condom?
47:06 Why would an AI want to kill us?
55:47 How the Cold War makes AI more dangerous
1:09:49 Roko and Alex make final statements before Overtime

Robert Wright is an interesting guy. I've been following his podcasts since the early days at Blogging Heads. Before that I read him in The New Republic and I read his book, NonZero. But he was in over his head trying to host this debate.

[And yes, it's that Roko.]

Backing in

Plutocracy in America, liminality in the world

Two opinion pieces in the NYTimes caught my attention. One is about the paradox of plutocracy in America, the other is about the future.

Krugman on plutocrats

He opens:

The rich are different from you and me: They have immensely more power. But when they try to exercise that power they can trap themselves — supporting politicians who will, if they can, create a society the rich themselves wouldn’t want to live in.

This, I’d argue, is the common theme running through four major stories that have been playing out over the past few months. They are: the relationship between Justice Clarence Thomas and the billionaire Harlan Crow; the rise and seeming decline of Ron DeSantis’s presidential campaign; the trials (literally) of Fox News; and the Muskopalypse at Twitter.

Whoops!

... however, there’s only so much you can achieve in America, imperfect and gerrymandered as our democracy may be, unless you can win over large numbers of voters who don’t support a pro-billionaire economic agenda.

It’s a simplification, but I think fundamentally true, to say that the U.S. right has won many elections, despite an inherently unpopular economic agenda, by appealing to intolerance — racism, homophobia and these days anti-“wokeness.” Yet there’s a risk in that strategy: Plutocrats who imagine that the forces of intolerance are working for them can wake up and discover that it’s the other way around.

Endgame:

I still believe that the concentration of wealth at the top is undermining democracy. But it isn’t a simple story of plutocratic rule. It is, instead, a story in which the attempts of the superrich to get what they want have unleashed forces that may destroy America as we know it. And it’s terrifying.

And the future?

Jerome Roos begins by observing, “Humanity now faces a confluence of challenges unlike any other in its history.” In consequence

we are presented with two familiar but very different visions of the future: a doomsday narrative, which sees apocalypse everywhere, and a progress narrative, which maintains that this is the best of all possible worlds. Both views are equally forceful in their claims — and equally misleading in their analysis. The truth is that none of us can really know where things are headed. The crisis of our times has blown the future right open.

After sketching the two views he says “No!”

To truly grasp the complex nature of our current time, we need first of all to embrace its most terrifying aspect: its fundamental open-endedness. It is precisely this radical uncertainty — not knowing where we are and what lies ahead — that gives rise to such existential anxiety.

Anthropologists have a name for this disturbing type of experience: liminality. It sounds technical, but it captures an essential aspect of the human condition. Derived from the Latin word for threshold, liminality originally referred to the sense of disorientation that arises during a rite of passage. In a traditional coming-of-age ritual, for instance, it marks the point at which the adolescent is no longer considered a child but is not yet recognized as an adult — betwixt and between, neither here nor there. Ask any teenager: Such a state of suspension can be a very disconcerting time to live through.

We are ourselves in the midst of a painful transition, a sort of interregnum, as the Italian political theorist Antonio Gramsci famously called it, between an old world that is dying and a new one that is struggling to be born. Such epochal shifts are inevitably fraught with danger. Yet for all their destructive potential, they are also full of possibility.

Roos suggests that, rather than “conceive of history as a straight line tending either up toward gradual improvement or down toward an inevitable collapse,” we should see it as an alternation of periods of calm and upheaval.

Progress and catastrophe, those binary opposites, are really joined at the hip. Together, they engage in an endless dance of creative destruction, forever breaking new ground and spiraling out into the unknown.

Somehow I’m not reassured by this. Not, mind you, that I’m looking for either a progress or a catastrophe narrative. But this proposal seems to me mostly superficial window dressing, rather like the film I noted the other day, Charlie Wilson’s War.

Meanwhile I’m thinking about Krugman’s plutocrats busy undermining the high ground on which they build their castles.

A five-year-old's response to AI x-risk {dum de dum-dum doom!}

Coffee, bacon, waffle, condiments

GPT-4 on doomsday cults

On April 17 I queried GPT-4 (March 23 version) about doomsday cults. It’s not that I have a particular interest in doomsday cults, but that I think that current concerns about existential risks posed by artificial intelligence seem to participate in – what do we call it? a psycho-social-cultural formation? – an ethos that derives as much from cult behavior as from a coherent and rational evaluation of A.I. technology.

I start out with a general question about doomsday cults. Then I mention two contemporary examples, The People’s Temple (Jim Jones) and the Branch Davidians. I then ask it to name some more examples, which it does, and then: “How far back in history do we have records of doomsday cults?” Finally I ask it about belief in AI x-risk. Here’s the second paragraph from its three-paragraph answer:

However, the belief that AI will inevitably destroy all humans and that there is no way to prevent it can be seen as an extreme and apocalyptic view. If a group of people were to organize around this belief and engage in behaviors such as isolating themselves from society or advocating for violent actions to prevent AI from becoming too powerful, then they could potentially be classified as a doomsday cult.

That second sentence is worth serious thought. That hasn’t happened yet. But we’ve only been living with general access to chatbots for four-and-a-half months. What’s going to happen in the future?

I don’t know.

* * * * *

What can you tell me about doomsday cults?

Doomsday cults are groups of people who believe in an impending apocalypse or catastrophic event that will result in the end of the world as we know it. These groups often develop unique and extreme beliefs about the nature of the apocalypse, including specific dates, scenarios, or supernatural explanations for the end of the world.

Doomsday cults may also develop highly structured and controlling social hierarchies, with a charismatic leader or group of leaders who hold significant power over the members. This can lead to the exploitation of followers and the use of extreme tactics to maintain the group's cohesion and loyalty.

In some cases, doomsday cults have been known to engage in violent or criminal behavior, including mass suicide or murder-suicide. Examples of infamous doomsday cults include the People's Temple led by Jim Jones and the Branch Davidians led by David Koresh.

It's important to note that not all groups with apocalyptic beliefs are dangerous or harmful, and many people may hold apocalyptic beliefs without being part of a formal cult or group. However, when a group becomes highly controlling, manipulative, or violent, it can be a cause for concern.

Can you tell me more about the People's Temple and the Branch Davidians.

Certainly. The People's Temple was a religious group led by Jim Jones that was founded in the 1950s in Indianapolis, Indiana. The group started as a progressive and inclusive Christian church, but over time, Jones became increasingly paranoid and controlling. In 1977, Jones moved the group to Guyana in South America, where they established the Jonestown settlement.

On November 18, 1978, a delegation from the United States arrived at Jonestown to investigate reports of human rights abuses in the community. Jones ordered his followers to kill the delegation and commit mass suicide by drinking cyanide-laced punch. In total, 918 people died, including over 300 children. The Jonestown massacre remains one of the largest mass suicides in history and is widely considered to be a case of a destructive cult.

The Branch Davidians were a religious group that originated as a splinter group from the Seventh-day Adventist Church in the 1950s. The group was led by David Koresh, who proclaimed himself to be a messiah and claimed to receive divine revelations. In 1993, the group came into conflict with law enforcement officials when the Bureau of Alcohol, Tobacco, and Firearms (ATF) attempted to raid their compound in Waco, Texas, on suspicion of illegal firearms possession.

After a prolonged standoff, the compound was raided by the FBI, and a fire broke out that killed Koresh and 76 of his followers, including many women and children. The incident was widely criticized for the excessive use of force by law enforcement and sparked debates about the rights of religious groups and the role of government in regulating them. The Branch Davidians are also considered to be a destructive cult due to the extreme beliefs and controlling behavior of their leader.

Can you name some more examples?

Sure. Here are some additional examples of doomsday cults:

  1. Heaven's Gate: A UFO religious group led by Marshall Applewhite and Bonnie Nettles. In 1997, the group committed mass suicide in order to reach an alien spacecraft they believed was following the Hale-Bopp comet.
  2. Aum Shinrikyo: A Japanese cult led by Shoko Asahara that blended elements of Buddhism, Hinduism, and Christianity. In 1995, the group carried out a sarin gas attack on the Tokyo subway system that killed 13 people and injured thousands.
  3. Order of the Solar Temple: A secret society that blended elements of Christianity and New Age beliefs. The group was responsible for a series of mass suicides in Switzerland and Canada in the 1990s.
  4. Movement for the Restoration of the Ten Commandments of God: A Ugandan doomsday cult that believed the world would end on December 31, 1999. When the predicted apocalypse did not occur, the group's leaders organized a mass murder-suicide that killed over 1,000 people.

It's important to note that while these groups may share some common characteristics, each one is unique in its beliefs, practices, and history. Additionally, not all groups with apocalyptic beliefs are dangerous or harmful, and many people may hold apocalyptic beliefs without being part of a formal cult or group.

How far back in history do we have records of doomsday cults?

The idea of an impending apocalypse or end of the world has been present in many cultures and religions throughout history, so it is difficult to pinpoint the origins of doomsday cults. However, some examples of historical groups that have been described as doomsday cults include:

  1. The Montanists: A Christian movement in the 2nd century that believed the end of the world was imminent and that the Holy Spirit would soon descend upon the earth.
  2. The Millerites: A 19th-century American religious group led by William Miller that believed the Second Coming of Christ was imminent and predicted that it would occur on October 22, 1844. When this prediction did not come to pass, the group experienced what is known as the Great Disappointment.
  3. The Cargo Cults: A series of Melanesian religious movements in the 19th and 20th centuries that emerged in response to the arrival of Western goods and technology. Some of these movements prophesized that the arrival of Western ships and planes signaled the coming of a messiah who would bring abundance and prosperity.

While these groups may not fit the modern definition of a doomsday cult, they share some similarities in their beliefs about an impending cataclysmic event and the need for preparation or repentance.

These days there are people who believe that advanced artificial intelligence will destroy all humans. Some of these people believe that this could happen in less than 50 years. Could these people be considered to be doomsday cultists?

It is important to note that not all people who believe in the potential risks of advanced artificial intelligence can be considered doomsday cultists. Concerns about the potential negative impact of AI on society and humanity are legitimate and have been discussed by many experts in the field.

However, the belief that AI will inevitably destroy all humans and that there is no way to prevent it can be seen as an extreme and apocalyptic view. If a group of people were to organize around this belief and engage in behaviors such as isolating themselves from society or advocating for violent actions to prevent AI from becoming too powerful, then they could potentially be classified as a doomsday cult.

It is worth noting that many experts in the field of AI do not share the belief that AI will necessarily be harmful to humanity. Some argue that the development of advanced AI could be a positive force for progress and that the risks can be mitigated through careful research and development, as well as through ethical frameworks and regulations.

Monday, April 17, 2023

Daffodils

A strange and interesting podcast on AI, AI Doom, and the future

I've seen one prior episode of this podcast, Bankless, an episode in which Eliezer Yudkowsky scared the daylights out of the hosts when he laid out his standard story of AI doom, something they obviously weren't expecting. These guys know about crypto, but not about AI, so they got blindsided. In this episode they talk with Robin Hanson, who patiently pours water on Yudkowsky's version of the future with AI.

Hanson is quite familiar with Yudkowsky's thinking, having been a blogger partner of his a decade ago and having debated with him many times. Hanson has his own strange vision of the future – in which we upload our minds to machines and these machines proliferate, and then there's his "grabby" aliens stuff – so there's a lot of mind-boggling going on. He gives the substance of a recent blog post, Most AI Fear Is Future Fear, in which he makes the point that human culture is ever changing and will continue to do so in the future.

I don't necessarily endorse any of this discussion in particular, though I do think AI doom is more mythical than real, but present this podcast as an example of "future shock," to borrow a phrase from Alvin Toffler. The process of assimilating AI is going to force us to rethink a lot of STUFF from top to bottom. This will be a tricky process and will take generations to accomplish.

We're Not Going to Die: Why Eliezer Yudkowsky is Wrong with Robin Hanson

From the YouTube show notes:

In this highly anticipated sequel to our 1st AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University.

In this episode, we explore:

- Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
- The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
- Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
- A fascinating analogy: why Robin thinks alien civilizations might spread like cancer?
- Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.

Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.

Topics Covered

0:00 Intro
8:42 How Robin is Weird
10:00 Are We All Going to Die?
13:50 Eliezer’s Assumption
25:00 Intelligence, Humans, & Evolution
27:31 Eliezer Counter Point
32:00 Acceleration of Change
33:18 Comparing & Contrasting Eliezer’s Argument
35:45 A New Life Form
44:24 AI Improving Itself
47:04 Self Interested Acting Agent
49:56 Human Displacement?
55:56 Many AIs
1:00:18 Humans vs. Robots
1:04:14 Pause or Continue AI Innovation?
1:10:52 Quiet Civilization
1:14:28 Grabby Aliens
1:19:55 Are Humans Grabby?
1:27:29 Grabby Aliens Explained
1:36:16 Cancer
1:40:00 Robin’s Thoughts on Crypto
1:42:20 Closing & Disclaimers

Charlie Wilson’s War [Media Notes 91] Looking for the irony?

I’m pretty sure that I saw Charlie Wilson’s War when it came out in 2007. But I have no idea what I thought about it. I still don’t, though I watched it the other day on Netflix. It was entertaining enough and fraught with ironies. But still...

The basics: It was directed by Mike Nichols, apparently his last film. Tom Hanks is Congressman Charlie Wilson, a patriotic womanizing not-quite-a-good-old-boy from Texas. Julia Roberts is Joanne Herring, the sixth wealthiest woman in Texas, Christian, patriotic, and passionately interested in the Soviet invasion of Afghanistan. Politics has brought them together. She convinces him to fund a covert war against the Russians. Gust Avrakotos, a CIA operative, plans the operation. Over the course of the 1980s – the film is based, somewhat problematically according to the Wikipedia entry, on real events – the war eventually succeeds. The Soviets leave, metaphorical tail between virtual legs.

We know, however, that things didn’t work out too well in Afghanistan. And I don’t just mean we-in-2023, I mean we-in-2007, when the movie came out. Back then, six years after 9/11, we knew that things didn’t work out. There are hints within the film as well. Even as they’re drinking to victory, the CIA guy, Gust Avrakotos, is telling the Congressman, Charlie Wilson: “I’m gonna hand you a code word classified NIE right now and it’s gonna tell you that the crazies have started rolling into Kandahar like it’s a fucking bathtub drain.” At the same time we hear an airplane overhead. Oh yes, the film knows what happened afterward, and wants us to know that it knows.

But still.

According to the Wikipedia article, the original screenplay ended with a scene of the 9/11 attack on the World Trade Center. Tom Hanks wasn’t comfortable with that, so a different ending was devised, one where Charlie Wilson was given an award by “the clandestine services” (the phrase used in the film) for his service, the first “civilian” to be so honored. The film opened with that ceremony as well.

But that ceremonial triumph wasn’t the last thing in the film. Just before the end credits roll the screen goes blank and we see this:

“These things happened. They were glorious and they changed the world...and then we fucked up the end game.” – Charlie Wilson

That’s the last word.

But still, so what? There’s a running story about a Zen master that points beyond the events depicted in the film, and we see sex and war business constantly juxtaposed and intermingled. There’s irony galore.

But the film doesn’t live up to its own material. In fact, it seems at odds with its material. The basic story is one of military and political triumph, with some sexual hijinks on the side. The hints about the “crazies,” which I assume is a reference to the Taliban, and fucking up the endgame, are there, not to indicate anything about historical complexity, but simply to telegraph to the audience: “We know! We know!” We know what?

Where's it from? Where's it go?

Sunday, April 16, 2023

Competition between Google and Microsoft in search

I’ve got doubts about competition between Google and Microsoft on artificial intelligence. Setting aside my doubts about competition as a universal economic virtue – after all, one can win by cheating – I fear it will lead to the premature release of products, as it already has in the case of Google’s Bard. On the other hand, competition in the search space, that’s different. Google has had it too easy for too long.

The problem, of course, is that competition in search is intimately bound up with competition in AI. I fear we’re in for a bumpy ride.

The NYTimes has an article about competition in search: Google Devising Radical Search Changes to Beat Back A.I. Rivals, by Nico Grant, April 16, 2023. The article opens with Google in a panic over the possibility that “Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.” Apple has a similar contract that ends this year.

A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.

The new features, under the project name Magi, are being created by designers, engineers and executives working in so-called sprint rooms to tweak and test the latest versions. The new search engine would offer users a far more personalized experience than the company’s current service, attempting to anticipate users’ needs.

Lara Levin, a Google spokeswoman, said in a statement that “not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new A.I.-powered features to search, and will share more details soon.”

Billions of people use Google’s search engine every day for everything from finding restaurants and directions to understanding a medical diagnosis, and that simple white page with the company logo and an empty bar in the middle is one of the most widely used web pages in the world. Changes to it would have a significant impact on the lives of ordinary people, and until recently it was hard to imagine anything challenging it.

There’s much more in the article.