Wednesday, June 19, 2019
In May of this year John Brockman hosted one of those high-class gab fests he loves so much. This one was on the theme of Possible Minds (from this book). Here's a talk by Rodney Brooks, with comments by various distinguished others, on the theme "The Cul-de-Sac of the Computational Metaphor". Brooks opens:
I’m worried that the crack cocaine of Moore’s law, which has given us more and more computation, has lulled us into thinking that that’s all there is. When you look at Claus Pias’s introduction to the Macy Conferences book, he writes, "The common precondition of the three foundational concepts of cybernetics—switching (Boolean) algebra, information theory and feedback—is digitality." They go straight into digitality in this conference. He says, "We considered Turing’s universal machine as a 'model' for brains, employing Pitts' and McCulloch’s calculus for activity in neural nets." Anyone who has looked at the Pitts and McCulloch papers knows it's a very primitive view of what is happening in neurons. But they adopted Turing’s universal machine.

How did Turing come up with Turing computation? In his 1936 paper, he talks about a human computer. Interestingly, he uses the male pronoun, whereas most of them were women. A human computer had a piece of paper, wrote things down, and followed rules—that was his model of computation, which we have come to accept.

[Note that Turing came up with his concept of computational process by abstracting over what he observed humans do while calculating. It's an abstracted imitation of a human activity. – B.B.]

We’re talking about cybernetics, but in AI, in John McCarthy’s 1955 proposal for the 1956 AI Workshop at Dartmouth, the very first sentence is, "We propose a study of artificial intelligence." He never defines artificial intelligence beyond that first sentence. That’s the first place it’s ever been used. But the second sentence is, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." As a materialist reductionist, I agree with that.

The second paragraph is, "If a machine can do a job, then an automatic calculator can be programmed to simulate the machine."
That’s a jump from any sort of machine to an automatic calculator. And that’s in the air, that’s what we all think. Neuroscience uses computation as a metaphor, and I question whether that’s the right set of metaphors. We know computation is not enough for everything. Classical computation cannot handle quantum information processing.
Note the opposition/distinction between classical computing and quantum information processing: classical|quantum, computing|information processing. Of course quantum computing is all the rage in some quarters as it promises enormous throughput.
Various people interrupt with observations about those initial remarks. Note this one from Stephen Wolfram: "The formalism of quantum mechanics, like the formalism of current classical mechanics, is about real numbers and is not similar to the way computation works."
What's computation? Brooks notes:
Who is familiar with Lakoff and Johnson’s arguments in Metaphors We Live By? They talk about how we think in metaphors, which are based in the physical world in which we operate. That’s how we think and reason. In Turing’s computation, we use metaphors of place, and state, and change of state at place, and that’s the way we think about computation. We think of it as these little places where we put stuff and we move it around. That’s our vision of computation.
One example where, Brooks claims, the computer metaphor doesn't work very well:
Here’s another example: Where did neurons come from? If you go back to very primitive creatures, there was electrical transmission across surfaces of cells, and then some things managed to transmit internally in the axons. If you look at jellyfish, sometimes they have totally separate neural networks of different neurons and completely separate networks for different behaviors.

For instance, one of the things that neurons work out well for jellyfish is how to synchronize their swimming. They have a central clock generator, the signal gets distributed on the neurons, but there are different transmission times from the central clock to the different parts of the creature. So, how do they handle that? Well, different species handle it in different ways. Some use amazingly fast propagation. Others, because the spikes attenuate as they go a certain distance, there is a latency, which is inversely proportional to the signal strength. So, the weaker the signal strength, the quicker you operate, and that’s how the whole thing synchronizes.

Is information processing the right metaphor there? Or are control theory and resonance and synchronization the right metaphor? We need different metaphors at different times, rather than just computation. Physical intuition that we probably have as we think about computation has served physicists well, until you get to the quantum world. When you get to the quantum world, that physical intuition about stuff and place gets in the way.
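Brooks's jellyfish example can be turned into a toy model. This is my own cartoon sketch, with invented numbers and a made-up attenuation rule, not Brooks's account or real biophysics: the conduction delay grows with distance while the response latency shrinks as the arriving signal gets weaker, so the two trade off and every muscle patch contracts at the same moment.

```python
# Toy model of jellyfish swim synchronization. All constants are invented
# for illustration; this is a cartoon of Brooks's point, not biophysics.
V = 0.5      # conduction velocity in m/s (assumed)
S0 = 1.0     # spike strength at the central clock, arbitrary units
D_MAX = 0.1  # distance to the farthest muscle patch in meters (assumed)

def arriving_strength(d):
    """Spike attenuates linearly with distance (cartoon attenuation)."""
    return S0 * (1.0 - d / D_MAX)

def response_latency(s):
    """Weaker arriving signal -> quicker response, per Brooks's description."""
    return (s / S0) * (D_MAX / V)

def contraction_time(d):
    """Total time = conduction delay plus local response latency."""
    return d / V + response_latency(arriving_strength(d))

# Despite different distances, every patch contracts at (nearly) the same
# moment: the delay gained in conduction is lost in response latency.
times = [contraction_time(d) for d in (0.0, 0.025, 0.05, 0.075, 0.1)]
print(times)  # all approximately D_MAX / V = 0.2 s
```

The point of the sketch is that the synchrony falls out of an analog trade-off between attenuation and latency, with no symbolic computation anywhere, which is exactly why Brooks suggests control theory and resonance may be the better metaphors here.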
A bit later Brooks notes: "A lot of what we do in computation and in physics and in neuroscience is getting stuck in these metaphors."
A bit later Brooks notes:
I pointed out in the note to John [Brockman] about a recent paper titled "Could a Neuroscientist Understand a Microprocessor?" I talked about this many years ago. I speculated that if you applied the ways neuroscientists work on brains, with probes, and look at correlations between signals and applied that to a microprocessor without a model of the microprocessor and how it works, it would be very hard to figure out how it works.

There’s a great paper in PLOS last year where they took a 6502 microprocessor that was running Donkey Kong and a few other games and did lesion studies on it; they put probes in. They found the Donkey Kong transistors, which if you lesioned out 98 of the 4,000 transistors, Donkey Kong failed, whereas different games didn’t fail with those same transistors. So, that was localizing Donkey Kong-ness in the 6502.

They ran many experiments, similar to those run in neuroscience. Without an underlying model of what was going on internally, it came up with pretty much garbage stuff that no computer scientist thinks relevant to anything. It’s breaking abstraction. That’s why I’m wondering about where we can find new abstractions, not necessarily as different as quantum mechanics or relativity is from normal physics, but are there different ways of thinking that are not extremely mind-breaking that will enable us to do new things in the way that computation and calculus enables us to do new things?
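The lesion logic can be illustrated with a toy circuit. This is my own invented four-gate example, nothing like the actual PLOS study: knock out one gate at a time and record which "behaviors" break. The experiment localizes behaviors to components while telling you nothing about the mechanism.

```python
# Toy "lesion study" on a four-gate circuit with two behaviors:
# XOR ("game A") and AND ("game B"). Invented example, not the 6502 study.

def circuit(a, b, lesioned=frozenset()):
    """Evaluate the circuit; a lesioned gate's output is forced to 0."""
    def gate(name, value):
        return 0 if name in lesioned else value
    nand1 = gate("nand1", 1 - (a & b))
    or1 = gate("or1", a | b)
    xor_out = gate("xor", nand1 & or1)  # XOR built from NAND and OR
    and_out = gate("and", a & b)
    return {"xor": xor_out, "and": and_out}

def broken_behaviors(gate_name):
    """Which behaviors change when we lesion this one gate?"""
    broken = set()
    for a in (0, 1):
        for b in (0, 1):
            healthy = circuit(a, b)
            damaged = circuit(a, b, frozenset({gate_name}))
            broken |= {k for k in healthy if healthy[k] != damaged[k]}
    return broken

# The "XOR transistors": lesioning them kills XOR but leaves AND intact,
# just as lesioning 98 transistors killed Donkey Kong but not other games.
assert broken_behaviors("nand1") == {"xor"}
assert broken_behaviors("and") == {"and"}
```

The punchline mirrors Brooks's: the lesion correlations are real, but without the abstraction (that nand1 and or1 jointly implement XOR), the lesion map says almost nothing about how the circuit actually works.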
Tuesday, June 18, 2019
David Chalmers tells us how computers will eclipse us in the future – aka where do these people get this stuff? Philosophy as fan service.
Prashanth Ramakrishna interviews David Chalmers for the NYTimes:
D.C.: Deep learning is great for things we do perceptually as human beings — image recognition, speech recognition and so on. But when it comes to anything requiring autonomy, reasoning, decisions, creativity and so on, A.I. is only good in limited domains. It’s pretty good at playing games like Go. The moment you get to the real world, though, things get complicated. There are a lot of mountains we need to climb before we get to human-level A.G.I. That said, I think it’s going to be possible eventually, say in the 40-to-100-year time frame.
Once we have a human-level artificial intelligence, there’s just no doubt that it will change the world. A.G.I.s are going to be beings with powers initially equivalent to our own and before long much greater than our own. [...]
I value human history and selfishly would like it to be continuous with the future. How much does it matter that our future is biological? At some point I think we must face the fact that there are going to be many faster substrates for running intelligence than our own. If we want to stick to our biological brains, then we are in danger of being left behind in a world with superfast, superintelligent computers. Ultimately, we’d have to upgrade.
The other way it could go is that new artificial intelligences take over the world and there’s no place for humanity. Maybe we’re relegated to some virtual world or some designated part of the physical world. But you’re right, it would be a second-class existence. At the very least maybe they keep us around as pets or for entertainment or for history’s sake. That would be a depressing outcome. Maybe they’d put us in virtual worlds, we’d never know, and we’d forget all this stuff. Maybe it’s already happened and we’re living in one of those virtual worlds now. Hey, it’s not so bad.
I suppose he really believes this. But why? This is just an interview. No doubt he's run over this ground with greater rigor in some of his publications, as have many others. But how much rigor is possible with this kind of material? Not much, not much at all. It's mostly fantasy. And he's mostly playing to his fans.
And then there's the idea that we're all living in a simulation:
D.C.: This goes back a long way in the history of philosophy. René Descartes said, “How do you know you’re not being fooled by an evil demon right now into thinking this is real when none of it’s real?” Descartes’ evil-demon question is kind of like the question of a virtual reality. The modern version of it is, “How do you know you’re not in the matrix? How do you know you’re not in a computer simulation where all this seems real but none of it is real?” It’s easy for even a movie like “The Matrix” to pump the intuition in you that “this is evil. This isn’t real. No, this is all fake.”
The view that virtual reality isn’t real stems from an outmoded view of reality. In the Garden of Eden, we thought that there was a primitively red apple embedded in a primitive space and everything is just as it seems to be. We’ve learned from modern science that the world isn’t really like that. A color is just a bunch of wavelengths arising from the physical reflectance properties of objects that produce a certain kind of experience in us. Solidity? Nothing is truly solid out there in the world. Things are mostly empty space, but they have the causal powers to produce in us the experience of solidity. Even space and time are gradually being dissolved by physics, or at least being boiled down to something simpler.
Physical reality is coming to look a lot like virtual reality right now. You could take the attitude, “So much the worse for physical reality. It’s not real.” But I think, no. It turns out we just take all that on board and say, “Fine, things are not the way we thought, but they’re still real.” That should be the right attitude toward virtual reality as well. Code and silicon circuitry form just another underlying substrate for reality. Is it so much worse to be in a computer-generated reality than what contemporary physics tells us? Quantum wave functions with indeterminate values? That seems as ethereal and unsubstantial as virtual reality. But hey! We’re used to it.
P.R.: I’m wondering whether it’s useful to say that virtual reality isn’t simply an alternate reality but is rather a sub-reality of the one we normally occupy.
D.C.: That I think is fair. It’s kind of a multiverse. None of this is saying there’s no objective reality. Maybe there’s an objective cosmos encompassing everything that exists. But maybe there’s a level-one cosmos and people create simulations and virtual realities within it. Maybe sometimes there are simulations within simulations. Who knows how many levels there are?
I once speculated that we’re at level 42. Remember that in “The Hitchhiker’s Guide to the Galaxy” they programmed a computer to find the answer to the ultimate question of life, the universe, everything. Then, after years, the computer said, “The answer is 42.” What question could possibly be important enough that this could be the ultimate question and the answer could be a simple number? Well, maybe the question was “What level of reality are we at?”
More fan service.
Our brains appear to operate near a critical point where it easy to shift bw diff states. Whether this is adaptive & what the states are has not been clear. New work discussed in a nice @QuantaMagazine essay sggsts brains are poised bw coherence & disorder https://t.co/vwYFwxg3Fo— Jessica Flack (@C4COMPUTATION) June 18, 2019
Charlie Wood, Do Brains Operate at a Tipping Point? New Clues and Complications, Quanta Magazine, June 10, 2019.
A team of Brazilian physicists analyzing the brains of rats and other animals has found the strongest evidence yet that the brain balances at the brink between two modes of operation, in a precarious yet versatile state known as criticality. At the same time, the findings challenge some of the original assumptions of this controversial “critical brain” hypothesis. [...]
In the 1990s, the physicist Per Bak hypothesized that the brain derives its bag of tricks from criticality. The concept originates in the world of statistical mechanics, where it describes a system of many parts teetering between stability and mayhem. Consider a snowy slope in winter. Early-season snow slides are small, while blizzards late in the season may set off avalanches. Somewhere between these phases of order and catastrophe lies a particular snowpack where anything goes: The next disturbance could set off a trickle, an avalanche or something in between. These events don’t happen with equal likelihood; rather, small cascades occur exponentially more often than larger cascades, which occur exponentially more often than those larger still, and so on. But at the “critical point,” as physicists call the configuration, the sizes and frequencies of events have a simple exponential relationship. Bak argued that tuning to just such a sweet spot would make the brain a capable and flexible information processor.
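Bak-style criticality can be sketched with a minimal branching-process model. This is my own toy, not the Brazilian team's analysis: each active neuron activates on average sigma others (the branching ratio). With sigma below 1 activity dies out quickly; above 1 it blows up; at sigma = 1, the critical point, avalanche sizes acquire the heavy, power-law-like tail described above.

```python
import random

# Toy branching-process model of neural avalanches (an illustration of
# Bak's idea, not the Physical Review Letters analysis). Each active unit
# makes two activation attempts that each succeed with probability
# sigma / 2, so the mean number of successors per unit is sigma.

def avalanche_size(sigma, rng, cap=10_000):
    """Run one avalanche from a single seed unit; cap runaway growth."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(2 * active) if rng.random() < sigma / 2)
    return size

def tail_fraction(sigma, n=5000, threshold=100, seed=0):
    """Fraction of avalanches that grow to at least `threshold` units."""
    rng = random.Random(seed)
    sizes = [avalanche_size(sigma, rng) for _ in range(n)]
    return sum(s >= threshold for s in sizes) / n

# Subcritical avalanches almost never get large; at the critical point,
# big cascades remain commonplace even though small ones dominate.
sub = tail_fraction(0.8)   # subcritical: exponentially suppressed tail
crit = tail_fraction(1.0)  # critical: heavy tail
print(sub, crit)
```

Running it shows the qualitative signature the critical-brain hypothesis looks for: the subcritical tail fraction is essentially zero while the critical one is not, even though both regimes produce mostly tiny avalanches.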
A bit later:
When the team looked in detail at where the critical point fell, however, they found that the rat brains weren’t balanced between phases of low and high neuronal activity, as predicted by the original critical brain hypothesis; rather, the critical point separated a phase in which neurons fired synchronously and a phase characterized by largely incoherent firing of neurons. This distinction may explain the hit-or-miss nature of past criticality searches. “The fact that we have reconciled the data from earlier research really points to something more general,” said Pedro Carelli, Copelli’s colleague and a coauthor of the research, which appeared in Physical Review Letters in late May.
But an anesthetized brain is not natural, so the scientists repeated their analysis on public data describing neural activity in free-roaming mice. They again found evidence that the animals’ brains sometimes experienced criticality satisfying the new gold standard from 2017. However, unlike with the anesthetized rats, neurons in the mice brains spent most of their time firing asynchronously — away from the alleged critical point of semi-synchronicity.
Copelli and Carelli acknowledge that this observation poses a challenge to the notion that the brain prefers to be in the vicinity of the critical point. But they also stress that without running the awake-animal experiment themselves (which is prohibitively expensive), they can’t conclusively interpret the mouse data. Poor sleep during the experiment, for instance, could have biased the animals’ brains away from criticality, Copelli said.
They and their colleagues also analyzed public data on monkeys and turtles. Although the data sets were too limited to confirm criticality with the full three-exponent relationship, the team calculated the ratio between two different power-law exponents indicating the distributions of avalanche sizes and durations. This ratio — which represents how quickly avalanches spread out — was always the same, regardless of species and whether the animal was under anesthesia. “To a physicist, this suggests some kind of universal mechanism,” Copelli said.
Monday, June 17, 2019
Read the whole thread:
Humans can decipher adversarial images! Our new work (out TODAY in @NatureComms) shows that people can do "theory of mind" on machines—predicting how machines will see the bizarre images that "fool" them.— Chaz Firestone (@chazfirestone) March 22, 2019
Full data & code: https://t.co/3qYTvxp7kE pic.twitter.com/4gohIv3pBX
Here's the article's abstract:
Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are “fooled” by adversarial examples—nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine’s classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: In 8 experiments on 5 prominent and diverse adversarial imagesets, human subjects correctly anticipated the machine’s preferred label over relevant foils—even for images described as “totally unrecognizable to human eyes”. Human intuition may be a surprisingly reliable guide to machine (mis)classification—with consequences for minds and machines alike.
This is a fascinating and, I believe, important result.
From the tweet stream:
We show that this is indeed the case! We showed human subjects images from many adversarial attacks, and made them guess how machines classified them — a "machine theory-of-mind" task. We found that, more often than not, humans can figure out how machines will see these images!— Chaz Firestone (@chazfirestone) March 22, 2019
Sunday, June 16, 2019
Consider what we might call the “traditional” one-hour TV drama, such as any in the Star Trek franchise. Each episode is a story independent of the others, with the end tied more or less “tightly” (whatever that means) to the beginning. This is true of, say, NCIS as well, and of Bonanza, a much older show that I watched in my youth. Occasionally we would have a story-line that is tightly linked across two episodes, but this is relatively unusual – sometimes this device would be used to link the end of one season to the beginning of the next. And occasionally we would have a loose story “arc” over several consecutive episodes. But there is no strong causal relationship between the first and final episodes in such a series. The last episode is simply the last episode.
And of course, in many cases the series is terminated because of poor ratings. In more fortunate cases the producers choose to end the show, perhaps because they see the audience disappearing, or because, you know, enough is enough. But in either case, there is no strong causal connection between the first episode in the series and the last one. Each episode is an independent story, and the first and last episodes are no exceptions. Except in the case of double episodes, and perhaps a story arc or three, one could watch the episodes in any order.
The Sopranos is different. It premiered in January, 1999, and ran for 86 episodes over six seasons, and it told a more or less continuous story over the whole series. The final episode, Made in America, aired on June 10, 2007. During the final season Tony had killed his nephew Christopher, whom he had been grooming to take over the operation, and the conflict between the New Jersey and New York mobs had reached a crisis point, with top men on both sides being murdered. Tony is alone and hunted. He gathers his family, his wife and two children, at a diner to have dinner, the screen goes black, and it’s over. The story just stopped.
And because of that the ending was controversial. Nothing was resolved. But what could resolution have possibly been in this story? It had long been clear that Tony was Tony and, despite his years of therapy, he wasn’t going to change. If he gets killed in the mob war, so what? If he wins and takes over New York, so what? If the FBI corrals them all, so what? And, yes, we have his wife and children, but so what?
Given the importance that eating has in the series – family dinner on Sunday, meals on ceremonial occasions, eating in the office at the back of the Bada Bing club, and so forth – one can see a loose kind of closure in the fact that the family was gathering for a meal. But that’s it. It resolves nothing. It just brings the ongoing narrative to a halt.
As such it’s quite different from the plotting we’re used to in novels, where we get one story from beginning to end with the end and beginning more or less tightly coupled to one another. The end resolves a conflict or closes a gap that existed at the beginning. And yet I found the ending of The Sopranos to be satisfying.
Why? Or, if you will, given that I’ve read many novels, and even written about a few of them, why is it that I find this rather different disposition of narrative energies to be satisfying? But then, I’ve watched a lot of TV as well.
This, it seems to me, is a formal issue, though somewhat different from the formal issues that have most concerned me. How do we evaluate how tightly episodes are coupled with one another? How do we describe and examine the causal structure of a narrative?
An exercise for the reader: Consider The Wire, which had five seasons between January, 2002, and June, 2008. Like The Sopranos it consists of one-hour episodes with a continuous story over the episodes. But it introduces a new element with each season. The first season establishes a basic narrative of police vs. drug dealers; this continues throughout the series. But the second season focuses on the port, and the desperate attempt of a union boss to keep it alive. In the third season an imaginative police commander experiments with zones in which drug dealing will be permitted, thus freeing his men to deal with other crimes. The fourth season moves to the school system, and the fifth and final season looks at The Baltimore Sun, a (once?) distinguished paper that had H. L. Mencken on staff.
The final episode concludes with a montage that shows us the fate of at least some of the characters. But conclusion? Baltimore will keep on keeping on: there will be drugs, the police and the government will remain corrupt, the school system will keep struggling, the port will stay empty, and the newspaper will go on dying. So the action didn’t just stop, as in The Sopranos. But this hardly counts as a resolution. It’s just a different way to stop moving forward.
For extra credit: What about Deadwood? The show was simply dropped after three seasons (2004-2006), with no chance to plan a last season and a last episode. What happened? How does that halting feel different from those in The Sopranos and The Wire?
But then David Milch (creator, producer, and writer) was able to talk HBO into letting him do a two-hour movie to bring things to a more satisfying close. I’ve not seen it, but for those who have, how satisfying was it?
Cynthia Haven has a post about a recent meeting of Stanford's Faculty Senate concerning the future of Stanford University Press.
He [graduate student Jason Beckman] referred to a meeting some graduate students had with Provost Drell on May 2, about how to interpret the standing of the humanities and social sciences at Stanford in the wake of her decision to effectively defund the press. He said she discussed the two types of degrees that she believes will serve our society going forward. “First, social science degrees buttressed by data proficiency and computer science skills, and second, humanities degrees, which are the ‘best equipped’ to deal with post-human concerns in a world of proliferating robotics and artificial intelligence. Far from assuaging our concerns, such a response only reaffirmed that this institution does indeed marginalize the humanities and social sciences—which seem to have value only insofar as they support STEM fields. A university that stands behind and supports all of its scholars and students, and that values scholarship itself, should not position itself as openly hostile or indifferent to certain kinds of scholarship. We find that bias clearly manifest in the Provost’s initial decision to decline the budget request from SUP, a decision that would devastate the press.”
Jill Lepore reviews seven new books on the moon landing, Why Did the Moon Landing Matter? (NYTimes). She concludes:
One small step for man, one giant leap for mankind. The lasting legacy of the voyage to the moon lies in the wonder of discovery, the joy of knowledge, not the gee-whizzery of machinery but the wisdom of beauty and the power of humility. A single photograph, the photograph of Earth taken from space by William Anders, on Apollo 8, in 1968, served as an icon for the entire environmental movement. People who’ve seen the Earth from space, not in a photograph but in real life, pretty much all report the same thing. “You spend even a little time contemplating the Earth from orbit and the most deeply ingrained nationalisms begin to erode,” Carl Sagan once described the phenomenon. “They seem the squabbles of mites on a plum.” This experience, this feeling of transcendence, is so universal, among the tiny handful of people who have ever felt it, that scientists have a term for it. It’s called the Overview Effect. You get a sense of the whole. Rivers look like blood. “The Earth is like a vibrant living thing,” the Chinese astronaut Yang Liu thought, when he saw it. It took Alan Shepard by surprise. “If somebody’d said before the flight, ‘Are you going to get carried away looking at the Earth from the moon?’ I would have said, ‘No, no way.’ But yet when I first looked back at the Earth, standing on the moon, I cried.” The Russian astronaut Yuri Artyushkin put it this way: “It isn’t important in which sea or lake you observe a slick of pollution or in the forests of which country a fire breaks out, or on which continent a hurricane arises. You are standing guard over the whole of our Earth.”
That’s beautiful. But here’s the hitch. It’s been 50 years. The waters are rising. The Earth needs guarding, and not only by people who’ve seen it from space. Saving the planet requires not racing to the moon again, or to Mars, but to the White House and up the steps of the Capitol, putting one foot in front of the other.
Apollo 11 in Real Time is live! Relive the first landing on the Moon for #Apollo50th— Ben Feist (@BenFeist) June 15, 2019
Includes all film footage, TV broadcasts, photographs, every word spoken, and more, including 11,000 hours of Mission Control audio never before made publicly availablehttps://t.co/PyMjtxWeRz pic.twitter.com/evLmH2U3EV
Friday, June 14, 2019
I don’t know whether or not I saw Network when it was originally released in 1976, nor for that matter, whether I watched it in whole or in part on TV sometime during the intervening four decades. Whatever the case, I watched it last night on Netflix. WOW. I can see why it won four Oscars, three for acting and one for Paddy Chayefsky’s screenplay.
And it plays interestingly against the current Trump and social media era.
The news anchor for UBS, Howard Beale, is to be dropped because of declining ratings. So he goes on air and promises that he will commit suicide on air in a subsequent broadcast. The network brass decide to drop him immediately rather than allow him to finish out his two weeks. Until...they learn that ratings went through the roof. They decide to bring him back and let him rant. He delivers a line that becomes his catchphrase and which has broken free of the film itself to enter pop culture, if only in a minor way: “I’m as mad as hell, and I'm not going to take this anymore!” [Cue Trump and his promise to make America great again.]
It takes awhile to get things adjusted, but the result is The Howard Beale Show, which includes one segment where Beale goes ballistic in front of a stained glass window and then slumps to the floor, but also features a segment involving a fortune teller, and so forth. The news has now been explicitly transformed into entertainment. At one point Beale discovers that CCA (Communications Corporation of America), the conglomerate which bought UBS, is itself going to be bought out by Saudi Arabians. [Remember, OAPEC (the Organization of Arab Petroleum Exporting Countries) had declared an oil embargo in 1973 in retaliation for the Yom Kippur War, and that put the Arab nations on the media radar screen in a big way. For that embargo hiked the cost of gas, killing the gas guzzler, and so reached directly into people’s pockets.] He goes on air, rants against the deal, and urges his listeners to send telegrams to the White House expressing opposition to the deal. Which they do.
This sends the UBS and CCA brass into a panic because they need the deal to go through for financial reasons. Arthur Jensen, chairman of CCA, sits Beale down and lectures him on his vision of the world, “describing the interrelatedness of the participants in the international economy and the illusory nature of nationality distinctions” (from the Wikipedia article). This is a world in which the interests of multinational corporations rule over all – the world of globalization that would become US foreign policy in subsequent years? Beale buys in, hook, line, and sinker, but his audience doesn’t like his dehumanizing globalist message. Ratings begin to drop.
UBS wants to drop him, but Jensen insists on keeping him on the air. So, the network brass are caught between a rock, declining ratings, and a hard place, orders to keep the source of those ratings on the air. What to do?
Well, they decide to get tricky. Back when they were hatching The Howard Beale Show they also hatched The Mao Tse-Tung Hour, a docudrama featuring footage in which the Ecumenical Liberation Army commits terrorist acts – recall that Patty Hearst was abducted by the Symbionese Liberation Army in 1974, an event mentioned in the film. The brass go through back channels to arrange Beale’s on-air assassination by the Ecumenicals. And that more or less ends the film.
[And now we live in a world where people get killed on social media.]
That’s not quite all there is to the film. These media transgressions and divagations play out against the backdrop of a June-October romance (she’s no starlet and he’s no rich old banker) between two executives: the young woman who helms the transformation of the news into entertainment and the middle-aged man who used to run the news.
An exercise for the reader: Take the elements from this pre-internet, pre-social media world and translate them into the current political and media environment. Interesting, sure. Prescient? Scary?
Note: This is episode 9 on Netflix, where the first two episodes are combined into a single numbered episode.
Broadcast March 9, 1993
I’ve been watching a lot of episodes from Star Trek franchises in the last few months, most of them in fact. And I’ve watched a great deal else besides. It’s my impression that, more than any other series or franchise, Star Trek presents many episodes as puzzles or games, with more or less explicit conventions or even rules. These conventions/rules always have an element of explicit logic to them.
One type of episode is one in which some of the continuing characters become pieces in a game played or manipulated by characters specific to the episode. Move Along Home is one such episode.
The episode is set on the space station, DS9, as it receives the Wadi, the first species from the Gamma Quadrant to visit the station. Upon boarding DS9 the Wadi head straight for Quark’s casino, where Quark engages them in Dabo, which they master and then proceed to clean Quark out. They catch him cheating and force him to play a game they brought with them, Chula.
They refuse to tell Quark the rules. FWIW, the game appears to be played on a pyramidal set of layers and involves four game pieces which move about the layers according to Quark’s throws of dice-like counters. Once the game is under way we learn that four senior officers of DS9 have been transported into an abstract world that seems rather game-like.
Which it is. What they do in this world is motivated and constrained by the moves Quark makes at Chula. The episode moves back and forth between Quark playing Chula and the officers (Sisko, Kira, Dax, and Bashir) in their abstract world. The four slowly figure out they’re in a game, and Quark comes to realize that the moves he makes in the game are being played out by those four officers, who have apparently left the station without a trace. And then things collapse. The game is over and the four officers appear in Quark’s. The Wadi leave and Sisko has a talk with Quark.
I have, of course, omitted all the details of gameplay etc. My purpose here is simply to register this as one example of a type of episode that occurs in Star Trek. What’s the point, if I may, of this kind of episode? What's it trying to achieve? By way of comparison, mysteries are also explicitly about puzzle solving – whodunit? But they’re quite different in construction and import, no?
Addendum 6.15.19: I’m thinking I probably short-changed Quark. When he finally realizes that his game’s play may well cost the lives of the players, he begs for their lives and pledges that he’ll never cheat again if they’re spared. Odo hears this pledge and, at the end, tells Sisko to talk with Quark about what he’d said, though Odo doesn’t mention the pledge itself. We don’t hear the pledge and, of course, we know that Quark will go on cheating. There is, of course, a sense in which the Wadi are dishonest as well, for they didn’t indicate the ‘dark side’ of their game when they brought it out.
Thursday, June 13, 2019
Rafael Núñez, Michael Allen, Richard Gao, Carson Miller Rigoli, Josephine Relaford-Doyle and Arturs Semenuks, What happened to cognitive science?, Nature Human Behaviour (2019)
While I hopped on board cognitive science during my graduate years – I filed a dissertation on Cognitive Science and Literary Theory in 1978 – I can't say I find this surprising.

Abstract:
More than a half-century ago, the ‘cognitive revolution’, with the influential tenet ‘cognition is computation’, launched the investigation of the mind through a multidisciplinary endeavour called cognitive science. Despite significant diversity of views regarding its definition and intended scope, this new science, explicitly named in the singular, was meant to have a cohesive subject matter, complementary methods and integrated theories. Multiple signs, however, suggest that over time the prospect of an integrated cohesive science has not materialized. Here we investigate the status of the field in a data-informed manner, focusing on four indicators, two bibliometric and two socio-institutional. These indicators consistently show that the devised multi-disciplinary program failed to transition to a mature inter-disciplinary coherent field. Bibliometrically, the field has been largely subsumed by (cognitive) psychology, and educationally, it exhibits a striking lack of curricular consensus, raising questions about the future of the cognitive science enterprise.
From the discussion:
The cognitive science enterprise faced, from the start, substantial challenges to integration. Over the decades, things became ever more elusive. Steady challenges to the fundamental tenets of the field, failures in classical artificial intelligence handling common sense and everyday language, major difficulties in integrating cultural variation and anthropology, as well as developments in brain research, genomics and evolutionary sciences seem to have gradually turned the enthusiastic initial common effort into a rather miscellaneous collection of academic practices that no longer share common goals and paradigms. Indeed, in scientometrics, unlike successful cases of interdisciplinary integration such as biochemistry, cognitive science has been referred to as the textbook case of failed interdisciplinarity and disappearance.
This failed integration has also been aggravated by the fact that over the years the term ‘cognitive’ has become highly polysemous and theoretically loaded, even in inconsistent ways. For instance, in cognitive psychology it primarily denotes information-processing psychology, following influential work in the 1960s that saw cognitive science as essentially the marriage between psychology and artificial intelligence, in which neuroscience and the study of culture played virtually no role. Thus, cognitive psychology doesn’t just designate a subfield of psychology that studies cognition and intelligence. Rather, it usually refers to a specific theoretical approach and research program in psychology. As a consequence, research on thought, language and reasoning based on, say, the work of Jean Piaget or Lev Vygotsky—who studied the psychology of thought, reasoning, and language—is normally not considered cognitive psychology. Indeed, in recent cognitive psychology textbooks, the work of these great pioneers is not even mentioned. When attached to linguistics, ‘cognitive’ denotes an entirely different thing. ‘Cognitive linguistics’ refers to a specific field that emerged in the 1980s as an explicit alternative to Chomskian linguistics, defending the view that language is not a special-purpose module but is governed by general principles of cognition and conceptualization. Thus, the term ‘cognitive’ in cognitive linguistics designates a school in linguistics that is fundamentally opposed to—and inconsistent with—Chomskian linguistics, which, with its formal treatment of language, had appealed to the computer scientists, anti-behaviourist psychologists and analytic philosophers of the 1950s and earned it a privileged founding role in cognitive science in the first place.
Another founding role was played by psychology, which, according to previous findings and the indicators analyzed here, has become decidedly overrepresented in cognitive science. But rather than being a “conquest” of the field, there seems to be a progressive disinterest on the part of other disciplines in investigating the mind in terms of the computationalist–representationalist tenets defined by the cognitive revolution.
So what do the cognitive sciences know, really?
On the one hand, there’s nothing uniting them the way evolution unites the biological sciences. So one might say that they know nothing. But a lot of work has been done in the last half to three-quarters of a century and, really, it’s not nothing. There’s something there; in fact, quite a lot. So we have to hold two things in mind at the same time: we know a lot, and we know nothing.