Wednesday, June 19, 2019
In May of this year John Brockman hosted one of those high-class gab fests he loves so much. This one was on the theme of Possible Minds (from this book). Here's a talk by Rodney Brooks, with comments by various distinguished others, on the theme "The Cul-de-Sac of the Computational Metaphor". Brooks opens:
I’m worried that the crack cocaine of Moore’s law, which has given us more and more computation, has lulled us into thinking that that’s all there is. When you look at Claus Pias’s introduction to the Macy Conferences book, he writes, "The common precondition of the three foundational concepts of cybernetics—switching (Boolean) algebra, information theory and feedback—is digitality." They go straight into digitality in this conference. He says, "We considered Turing’s universal machine as a 'model' for brains, employing Pitts' and McCulloch’s calculus for activity in neural nets." Anyone who has looked at the Pitts and McCulloch papers knows it's a very primitive view of what is happening in neurons. But they adopted Turing’s universal machine.

How did Turing come up with Turing computation? In his 1936 paper, he talks about a human computer. Interestingly, he uses the male pronoun, whereas most of them were women. A human computer had a piece of paper, wrote things down, and followed rules—that was his model of computation, which we have come to accept.

[Note that Turing came up with his concept of computational process by abstracting over what he observed humans do while calculating. It's an abstracted imitation of a human activity. – B.B.]

We’re talking about cybernetics, but in AI, in John McCarthy’s 1955 proposal for the 1956 AI Workshop at Dartmouth, the very first sentence is, "We propose a study of artificial intelligence." He never defines artificial intelligence beyond that first sentence. That’s the first place it’s ever been used. But the second sentence is, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." As a materialist reductionist, I agree with that.

The second paragraph is, "If a machine can do a job, then an automatic calculator can be programmed to simulate the machine."
That’s a jump from any sort of machine to an automatic calculator. And that’s in the air, that’s what we all think. Neuroscience uses computation as a metaphor, and I question whether that’s the right set of metaphors. We know computation is not enough for everything. Classical computation cannot handle quantum information processing.
Note the opposition/distinction between classical computing and quantum information processing: classical|quantum, computing|information processing. Of course quantum computing is all the rage in some quarters, as it promises enormous throughput.
Various people interrupt with observations about those initial remarks. Note this one from Stephen Wolfram: "The formalism of quantum mechanics, like the formalism of current classical mechanics, is about real numbers and is not similar to the way computation works."
What's computation? Brooks notes:
Who is familiar with Lakoff and Johnson’s arguments in Metaphors We Live By? They talk about how we think in metaphors, which are based in the physical world in which we operate. That’s how we think and reason. In Turing’s computation, we use metaphors of place, and state, and change of state at place, and that’s the way we think about computation. We think of it as these little places where we put stuff and we move it around. That’s our vision of computation.
One example where, Brooks claims, the computer metaphor doesn't work very well:
Here’s another example: Where did neurons come from? If you go back to very primitive creatures, there was electrical transmission across surfaces of cells, and then some things managed to transmit internally in the axons. If you look at jellyfish, sometimes they have totally separate neural networks of different neurons and completely separate networks for different behaviors.

For instance, one of the things that neurons work out well for jellyfish is how to synchronize their swimming. They have a central clock generator, the signal gets distributed on the neurons, but there are different transmission times from the central clock to the different parts of the creature. So, how do they handle that? Well, different species handle it in different ways. Some use amazingly fast propagation. Others, because the spikes attenuate as they go a certain distance, there is a latency, which is inversely proportional to the signal strength. So, the weaker the signal strength, the quicker you operate, and that’s how the whole thing synchronizes.

Is information processing the right metaphor there? Or are control theory and resonance and synchronization the right metaphor? We need different metaphors at different times, rather than just computation. Physical intuition that we probably have as we think about computation has served physicists well, until you get to the quantum world. When you get to the quantum world, that physical intuition about stuff and place gets in the way.
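Brooks's gloss here is compressed, but one way to read it yields a neat toy model. The sketch below is my own illustration, not anything from the talk: all the constants are invented, and I take "the weaker the signal strength, the quicker you operate" at face value by making the muscle's response latency shrink as the arriving spike gets weaker. Under those assumptions the travel delay from the central clock and the response latency cancel, so every body part contracts at the same moment:

```python
# Toy model of jellyfish swim synchronization. All numbers are invented.
SPEED = 10.0   # conduction speed along the neural net (toy units)
DECAY = 0.1    # spike amplitude lost per unit of distance (toy units)

def arrival_time(d):
    """Farther body parts hear the central clock later."""
    return d / SPEED

def strength(d):
    """...and they hear it weaker, because spikes attenuate."""
    return 1.0 - DECAY * d

def latency(sig):
    """Weaker arriving signal -> quicker muscle response.
    (Toy choice: latency numerically equal to signal strength.)"""
    return sig

for d in [0.0, 2.5, 5.0, 7.5]:
    t = arrival_time(d) + latency(strength(d))
    print(f"part at distance {d:4.1f} contracts at t = {t:.2f}")
```

Every part contracts at t = 1.00: the two delays cancel exactly, with no symbol-shuffling anywhere in the loop. Whether you want to call that "information processing" or just resonance and control is exactly Brooks's question.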
A bit later Brooks notes: "A lot of what we do in computation and in physics and in neuroscience is getting stuck in these metaphors."
A bit later Brooks notes:
I pointed out in the note to John [Brockman] about a recent paper titled "Could a Neuroscientist Understand a Microprocessor?" I talked about this many years ago. I speculated that if you applied the ways neuroscientists work on brains, with probes, and look at correlations between signals and applied that to a microprocessor without a model of the microprocessor and how it works, it would be very hard to figure out how it works.

There’s a great paper in PLOS last year where they took a 6502 microprocessor that was running Donkey Kong and a few other games and did lesion studies on it, they put probes in. They found the Donkey Kong transistors, which if you lesioned out 98 of the 4,000 transistors, Donkey Kong failed, whereas different games didn’t fail with those same transistors. So, that was localizing Donkey Kong-ness in the 6502.

They ran many experiments, similar to those run in neuroscience. Without an underlying model of what was going on internally, it came up with pretty much garbage stuff that no computer scientist thinks relevant to anything. It’s breaking abstraction. That’s why I’m wondering about where we can find new abstractions, not necessarily as different as quantum mechanics or relativity is from normal physics, but are there different ways of thinking that are not extremely mind-breaking that will enable us to do new things in the way that computation and calculus enables us to do new things?
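The logic of that lesion experiment is easy to re-enact in miniature. The sketch below is an invented five-"transistor" circuit of my own, not the real 6502 or the paper's actual method: knock out each component in turn, watch which "games" fail, and out pops a component that looks like it localizes Donkey Kong:

```python
# A five-component toy "chip" running two "games". Entirely invented,
# just to mimic the shape of the lesion experiment.
COMPONENTS = {"t1", "t2", "t3", "t4", "t5"}

# Which components each behavior happens to route through.
NEEDS = {
    "donkey_kong":    {"t1", "t2", "t3", "t4"},
    "space_invaders": {"t1", "t2", "t3", "t5"},
}

def runs(game, lesioned):
    """Does `game` still work with `lesioned` components knocked out?"""
    return NEEDS[game] <= (COMPONENTS - lesioned)

# Lesion one component at a time and compare behaviors.
for t in sorted(COMPONENTS):
    dk = runs("donkey_kong", {t})
    si = runs("space_invaders", {t})
    if not dk and si:
        print(f"{t} looks like a 'Donkey Kong transistor'")
```

The loop singles out t4. But the experiment has told us nothing about what t4 actually does; it is just a component one behavior happens to route through. That is Brooks's point about probing without an underlying model of the abstraction.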
Tuesday, June 18, 2019
David Chalmers tells us how computers will eclipse us in the future – aka where do these people get this stuff? Philosophy as fan service.
Prashanth Ramakrishna interviews David Chalmers for the NYTimes:
D.C.: Deep learning is great for things we do perceptually as human beings — image recognition, speech recognition and so on. But when it comes to anything requiring autonomy, reasoning, decisions, creativity and so on, A.I. is only good in limited domains. It’s pretty good at playing games like Go. The moment you get to the real world, though, things get complicated. There are a lot of mountains we need to climb before we get to human-level A.G.I. That said, I think it’s going to be possible eventually, say in the 40-to-100-year time frame.
Once we have a human-level artificial intelligence, there’s just no doubt that it will change the world. A.G.I.s are going to be beings with powers initially equivalent to our own and before long much greater than our own. [...]
I value human history and selfishly would like it to be continuous with the future. How much does it matter that our future is biological? At some point I think we must face the fact that there are going to be many faster substrates for running intelligence than our own. If we want to stick to our biological brains, then we are in danger of being left behind in a world with superfast, superintelligent computers. Ultimately, we’d have to upgrade.
The other way it could go is that new artificial intelligences take over the world and there’s no place for humanity. Maybe we’re relegated to some virtual world or some designated part of the physical world. But you’re right, it would be a second-class existence. At the very least maybe they keep us around as pets or for entertainment or for history’s sake. That would be a depressing outcome. Maybe they’d put us in virtual worlds, we’d never know, and we’d forget all this stuff. Maybe it’s already happened and we’re living in one of those virtual worlds now. Hey, it’s not so bad.
I suppose he really believes this. But why? This is just an interview. No doubt he's run over this ground with greater rigor in some of his publications, as have many others. But how much rigor is possible with this kind of material? Not much, not much at all. It's mostly fantasy. And he's mostly playing to his fans.
And then there's the idea that we're all living in a simulation:
D.C.: This goes back a long way in the history of philosophy. René Descartes said, “How do you know you’re not being fooled by an evil demon right now into thinking this is real when none of it’s real?” Descartes’ evil-demon question is kind of like the question of a virtual reality. The modern version of it is, “How do you know you’re not in the matrix? How do you know you’re not in a computer simulation where all this seems real but none of it is real?” It’s easy for even a movie like “The Matrix” to pump the intuition in you that “this is evil. This isn’t real. No, this is all fake.”
The view that virtual reality isn’t real stems from an outmoded view of reality. In the Garden of Eden, we thought that there was a primitively red apple embedded in a primitive space and everything is just as it seems to be. We’ve learned from modern science that the world isn’t really like that. A color is just a bunch of wavelengths arising from the physical reflectance properties of objects that produce a certain kind of experience in us. Solidity? Nothing is truly solid out there in the world. Things are mostly empty space, but they have the causal powers to produce in us the experience of solidity. Even space and time are gradually being dissolved by physics, or at least being boiled down to something simpler.
Physical reality is coming to look a lot like virtual reality right now. You could take the attitude, “So much the worse for physical reality. It’s not real.” But I think, no. It turns out we just take all that on board and say, “Fine, things are not the way we thought, but they’re still real.” That should be the right attitude toward virtual reality as well. Code and silicon circuitry form just another underlying substrate for reality. Is it so much worse to be in a computer-generated reality than what contemporary physics tells us? Quantum wave functions with indeterminate values? That seems as ethereal and unsubstantial as virtual reality. But hey! We’re used to it.
P.R.: I’m wondering whether it’s useful to say that virtual reality isn’t simply an alternate reality but is rather a sub-reality of the one we normally occupy.
D.C.: That I think is fair. It’s kind of a multiverse. None of this is saying there’s no objective reality. Maybe there’s an objective cosmos encompassing everything that exists. But maybe there’s a level-one cosmos and people create simulations and virtual realities within it. Maybe sometimes there are simulations within simulations. Who knows how many levels there are?
I once speculated that we’re at level 42. Remember that in “The Hitchhiker’s Guide to the Galaxy” they programmed a computer to find the answer to the ultimate question of life, the universe, everything. Then, after years, the computer said, “The answer is 42.” What question could possibly be important enough that this could be the ultimate question and the answer could be a simple number? Well, maybe the question was “What level of reality are we at?”
More fan service.
Our brains appear to operate near a critical point where it easy to shift bw diff states. Whether this is adaptive & what the states are has not been clear. New work discussed in a nice @QuantaMagazine essay sggsts brains are poised bw coherence & disorder https://t.co/vwYFwxg3Fo— Jessica Flack (@C4COMPUTATION) June 18, 2019
Charlie Wood, Do Brains Operate at a Tipping Point? New Clues and Complications, Quanta Magazine, June 10, 2019.
A team of Brazilian physicists analyzing the brains of rats and other animals has found the strongest evidence yet that the brain balances at the brink between two modes of operation, in a precarious yet versatile state known as criticality. At the same time, the findings challenge some of the original assumptions of this controversial “critical brain” hypothesis. [...]
In the 1990s, the physicist Per Bak hypothesized that the brain derives its bag of tricks from criticality. The concept originates in the world of statistical mechanics, where it describes a system of many parts teetering between stability and mayhem. Consider a snowy slope in winter. Early-season snow slides are small, while blizzards late in the season may set off avalanches. Somewhere between these phases of order and catastrophe lies a particular snowpack where anything goes: The next disturbance could set off a trickle, an avalanche or something in between. These events don’t happen with equal likelihood; rather, small cascades occur exponentially more often than larger cascades, which occur exponentially more often than those larger still, and so on. But at the “critical point,” as physicists call the configuration, the sizes and frequencies of events have a simple exponential relationship. Bak argued that tuning to just such a sweet spot would make the brain a capable and flexible information processor.
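The avalanche picture is easy to simulate with a standard toy model, a critical branching process. This sketch is my own illustration, not the Brazilian group's analysis: each active unit excites each of two neighbors with probability sigma/2, so the expected number of successors per unit is sigma. At sigma = 1, the critical point, avalanche sizes become heavy-tailed; below it, they fizzle quickly:

```python
import random

def avalanche_size(sigma, rng, cap=10_000):
    """Total firings in one avalanche of a branching process with
    mean offspring `sigma`; `cap` keeps runaway avalanches finite."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        # Each active unit excites each of 2 neighbors with prob sigma/2.
        active = sum(1 for _ in range(2 * active)
                     if rng.random() < sigma / 2)
    return size

rng = random.Random(0)
for sigma in (0.5, 1.0):
    sizes = sorted(avalanche_size(sigma, rng) for _ in range(3000))
    big = sum(s >= 100 for s in sizes)
    print(f"sigma={sigma}: largest avalanche {sizes[-1]}, "
          f"{big} of 3000 avalanches reached size 100+")
```

Subcritical runs die out after a handful of firings, while at sigma = 1 a non-negligible fraction of avalanches span the whole (capped) system, small ones still vastly outnumbering large ones. That mix of mostly-small with occasional system-spanning events is the "anything goes" regime Bak had in mind.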
A bit later:
When the team looked in detail at where the critical point fell, however, they found that the rat brains weren’t balanced between phases of low and high neuronal activity, as predicted by the original critical brain hypothesis; rather, the critical point separated a phase in which neurons fired synchronously and a phase characterized by largely incoherent firing of neurons. This distinction may explain the hit-or-miss nature of past criticality searches. “The fact that we have reconciled the data from earlier research really points to something more general,” said Pedro Carelli, Copelli’s colleague and a coauthor of the research, which appeared in Physical Review Letters in late May.
But an anesthetized brain is not natural, so the scientists repeated their analysis on public data describing neural activity in free-roaming mice. They again found evidence that the animals’ brains sometimes experienced criticality satisfying the new gold standard from 2017. However, unlike with the anesthetized rats, neurons in the mice brains spent most of their time firing asynchronously — away from the alleged critical point of semi-synchronicity.
Copelli and Carelli acknowledge that this observation poses a challenge to the notion that the brain prefers to be in the vicinity of the critical point. But they also stress that without running the awake-animal experiment themselves (which is prohibitively expensive), they can’t conclusively interpret the mouse data. Poor sleep during the experiment, for instance, could have biased the animals’ brains away from criticality, Copelli said.
They and their colleagues also analyzed public data on monkeys and turtles. Although the data sets were too limited to confirm criticality with the full three-exponent relationship, the team calculated the ratio between two different power-law exponents indicating the distributions of avalanche sizes and durations. This ratio — which represents how quickly avalanches spread out — was always the same, regardless of species and whether the animal was under anesthesia. “To a physicist, this suggests some kind of universal mechanism,” Copelli said.
Monday, June 17, 2019
Read the whole thread:
Humans can decipher adversarial images! Our new work (out TODAY in @NatureComms) shows that people can do "theory of mind" on machines—predicting how machines will see the bizarre images that "fool" them.— Chaz Firestone (@chazfirestone) March 22, 2019
Full data & code: https://t.co/3qYTvxp7kE pic.twitter.com/4gohIv3pBX
Here's the article's abstract:
Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are “fooled” by adversarial examples—nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine’s classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: In 8 experiments on 5 prominent and diverse adversarial imagesets, human subjects correctly anticipated the machine’s preferred label over relevant foils—even for images described as “totally unrecognizable to human eyes”. Human intuition may be a surprisingly reliable guide to machine (mis)classification—with consequences for minds and machines alike.
This is a fascinating and, I believe, important result.
From the tweet stream:
We show that this is indeed the case! We showed human subjects images from many adversarial attacks, and made them guess how machines classified them — a "machine theory-of-mind" task. We found that, more often than not, humans can figure out how machines will see these images!— Chaz Firestone (@chazfirestone) March 22, 2019
Sunday, June 16, 2019
Consider what we might call the “traditional” one-hour TV drama, such as any in the Star Trek franchise. Each episode is a story independent of the others, with the end tied more or less “tightly” (whatever that means) to the beginning. This is true of, say, NCIS as well, and of Bonanza, a much older show that I watched in my youth. Occasionally we would have a story-line that is tightly linked across two episodes, but this is relatively unusual – sometimes this device would be used to link the end of one season to the beginning of the next. And occasionally we would have a loose story “arc” over several consecutive episodes. But there is no strong causal relationship between the first and final episodes in such a series. The last episode is simply the last episode.
And of course, in many cases the series is terminated because of poor ratings. In more fortunate cases the producers choose to end the show, perhaps because they see the audience disappearing, or because, you know, enough is enough. But in either case, there is no strong causal connection between the first episode in the series and the last one. Each episode is an independent story, and the first and last episodes are no exceptions. Except in the cases of double episodes, and perhaps a story arc or three, one could watch the episodes in any order.
The Sopranos is different. It premiered in January, 1999, and ran for 86 episodes over six seasons, telling a more or less continuous story over the whole series. The final episode, Made in America, aired on June 10, 2007. During the season Tony had killed his nephew Christopher, whom he had been grooming to take over the operation, and the conflict between the New Jersey and New York mobs had reached a crisis point, with top men on both sides being murdered. Tony is alone and hunted. He gathers his family, his wife and two children, at a diner to have dinner, the screen goes black, and it’s over. The story just stopped.
And because of that the ending was controversial. Nothing was resolved. But what could resolution have possibly been in this story? It had long been clear that Tony was Tony and, despite his years of therapy, he wasn’t going to change. If he gets killed in the mob war, so what? If he wins and takes over New York, so what? If the FBI corrals them all, so what? And, yes, we have his wife and children, but so what?
Given the importance that eating has in the series – family dinner on Sunday, meals on ceremonial occasions, eating in the office at the back of the Bada Bing club, and so forth – one can see a loose kind of closure in the fact that the family was gathering for a meal. But that’s it. It resolves nothing. It just brings the ongoing narrative to a halt.
As such it's quite different from the plotting we’re used to in novels, where we get one story from beginning to end with the end and beginning more or less tightly coupled to one another. The end resolves a conflict or closes a gap that existed at the beginning. And yet I found the ending of The Sopranos to be satisfying.
Why? Or, if you will, given that I’ve read many novels, and even written about a few of them, why is it that I find this rather different disposition of narrative energies to be satisfying? But then, I’ve watched a lot of TV as well.
This, it seems to me, is a formal issue, though somewhat different from the formal issues that have most concerned me. How do we evaluate how tightly episodes are coupled with one another? How do we describe and examine the causal structure of a narrative?
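One crude way to make the coupling question operational (a toy metric of my own, not an established method) is to model a series as a set of directed "depends-on" links between episodes and score coupling as the fraction of adjacent episode pairs that are causally linked:

```python
def coupling(n_episodes, links):
    """Fraction of adjacent episode pairs (i, i+1) where the later
    episode causally depends on the earlier one."""
    adjacent = {(i, i + 1) for i in range(1, n_episodes)}
    return len(adjacent & links) / len(adjacent)

# A mostly episodic, Star Trek-style run of 10: one two-parter.
episodic = {(7, 8)}
# A fully serialized, Sopranos-style run: each episode leans on
# the one before it.
serialized = {(i, i + 1) for i in range(1, 10)}

print(coupling(10, episodic))     # low: 1 linked pair out of 9
print(coupling(10, serialized))   # 1.0
```

A real analysis would need non-adjacent links too (a season finale paying off the premiere), but even this crude score separates the episodic from the serialized pattern.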
An exercise for the reader: Consider The Wire, which had five seasons between January, 2002, and June, 2008. Like The Sopranos it consists of one-hour episodes with a continuous story over the episodes. But it introduces a new element with each season. The first season establishes a basic narrative of police vs. drug dealers; this continues throughout the series. But the second season focuses on the port, and the desperate attempt of a union boss to keep it alive. In the third season an imaginative police commander experiments with zones in which drug dealing will be permitted, thus freeing his men to deal with other crimes. The fourth season moves to the school system, and the fifth and final season looks at The Baltimore Sun, a (once?) distinguished paper that had H. L. Mencken on staff.
The final episode concludes with a montage that shows us the fates of at least some of the characters. But conclusion? Baltimore will keep on keeping on, there will be drugs, the police and the government will remain corrupt, the school system struggling, the port empty, and the newspaper dying. So, the action didn’t just stop, as in The Sopranos. But this hardly counts as a resolution. It’s just a different way to stop moving forward.
For extra credit: What about Deadwood? The show was simply dropped after three seasons (2004-2006), with no chance to plan a last season and a last episode. What happened? How does that halting feel different from those in The Sopranos and The Wire?
But then David Milch (creator, producer, and writer) was able to talk HBO into letting him do a two-hour movie to bring things to a more satisfying close. I’ve not seen it, but for those who have, how satisfying was it?
Cynthia Haven has a post about a recent meeting of Stanford's Faculty Senate concerning the future of Stanford University Press.
He [graduate student Jason Beckman] referred to a meeting some graduate students had with Provost Drell on May 2, about how to interpret the standing of the humanities and social sciences at Stanford in the wake of her decision to effectively defund the press. He said she discussed the two types of degrees that she believes will serve our society going forward. “First, social science degrees buttressed by data proficiency and computer science skills, and second, humanities degrees, which are the ‘best equipped’ to deal with post-human concerns in a world of proliferating robotics and artificial intelligence. Far from assuaging our concerns, such a response only reaffirmed that this institution does indeed marginalize the humanities and social sciences—which seem to have value only insofar as they support STEM fields. A university that stands behind and supports all of its scholars and students, and that values scholarship itself, should not position itself as openly hostile or indifferent to certain kinds of scholarship. We find that bias clearly manifest in the Provost’s initial decision to decline the budget request from SUP, a decision that would devastate the press.”
Jill Lepore reviews seven new books on the moon landing, Why Did the Moon Landing Matter? (NYTimes). She concludes:
One small step for man, one giant leap for mankind. The lasting legacy of the voyage to the moon lies in the wonder of discovery, the joy of knowledge, not the gee-whizzery of machinery but the wisdom of beauty and the power of humility. A single photograph, the photograph of Earth taken from space by William Anders, on Apollo 8, in 1968, served as an icon for the entire environmental movement. People who’ve seen the Earth from space, not in a photograph but in real life, pretty much all report the same thing. “You spend even a little time contemplating the Earth from orbit and the most deeply ingrained nationalisms begin to erode,” Carl Sagan once described the phenomenon. “They seem the squabbles of mites on a plum.” This experience, this feeling of transcendence, is so universal, among the tiny handful of people who have ever felt it, that scientists have a term for it. It’s called the Overview Effect. You get a sense of the whole. Rivers look like blood. “The Earth is like a vibrant living thing,” the Chinese astronaut Yang Liu thought, when he saw it. It took Alan Shepard by surprise. “If somebody’d said before the flight, ‘Are you going to get carried away looking at the Earth from the moon?’ I would have said, ‘No, no way.’ But yet when I first looked back at the Earth, standing on the moon, I cried.” The Russian astronaut Yuri Artyushkin put it this way: “It isn’t important in which sea or lake you observe a slick of pollution or in the forests of which country a fire breaks out, or on which continent a hurricane arises. You are standing guard over the whole of our Earth.”
That’s beautiful. But here’s the hitch. It’s been 50 years. The waters are rising. The Earth needs guarding, and not only by people who’ve seen it from space. Saving the planet requires not racing to the moon again, or to Mars, but to the White House and up the steps of the Capitol, putting one foot in front of the other.
Apollo 11 in Real Time is live! Relive the first landing on the Moon for #Apollo50th— Ben Feist (@BenFeist) June 15, 2019
Includes all film footage, TV broadcasts, photographs, every word spoken, and more, including 11,000 hours of Mission Control audio never before made publicly availablehttps://t.co/PyMjtxWeRz pic.twitter.com/evLmH2U3EV
Friday, June 14, 2019
I don’t know whether or not I saw Network when it was originally released in 1976, nor for that matter, whether I watched it in whole or in part on TV sometime during the intervening four decades. Whatever the case, I watched it last night on Netflix. WOW. I can see why it won four Oscars, three for acting and one for Paddy Chayefsky’s screenplay.
And it plays interestingly against the current Trump and social media era.
The news anchor for UBS, Howard Beale, is to be dropped because of declining ratings. So he goes on air and promises that he will commit suicide on air in a subsequent broadcast. The network brass decide to drop him immediately rather than allow him to finish out his two weeks. Until...they learn that ratings went through the roof. They decide to bring him back and let him rant. He delivers a line that becomes his catchphrase and which has broken free of the film itself to enter pop culture if only in a minor way: “I’m as mad as hell, and I'm not going to take this anymore!” [Cue Trump and his promise to make America great again.]
It takes a while to get things adjusted, but the result is The Howard Beale Show, which includes one segment where Beale goes ballistic in front of a stained glass window and then slumps to the floor, but also features a segment involving a fortune teller, and so forth. The news has now been explicitly transformed into entertainment. At one point Beale discovers that CCA (Communications Corporation of America), the conglomerate which bought UBS, is itself going to be bought out by Saudi Arabians. [Remember, OAPEC (the Organization of Arab Petroleum Exporting Countries) had declared an oil embargo in 1973 in retaliation for the Yom Kippur War, and that put the Arab nations on the media radar screen in a big way. For that embargo hiked the cost of gas, killing the gas guzzler, and so reached directly into people’s pockets.] He goes on air, rants against the deal, and urges his listeners to send telegrams to the White House expressing opposition to the deal. Which they do.
This sends the UBS and CCA brass into a panic because they need the deal to go through for financial reasons. Arthur Jensen, chairman of CCA, sits Beale down and lectures him on his vision of the world, “describing the interrelatedness of the participants in the international economy and the illusory nature of nationality distinctions” (from the Wikipedia article). This is a world in which the interests of multi-national corporations rule over all – the world of globalization that would become US foreign policy in subsequent years? Beale buys in, hook, line, and sinker, but his audience doesn’t like his dehumanizing globalist message. Ratings begin to drop.
UBS wants to drop him, but Jensen insists on keeping him on the air. So, the network brass are caught between a rock, declining ratings, and a hard place, orders to keep the source of those ratings on the air. What to do?
Well, they decide to get tricky. Back when they were hatching The Howard Beale Show they also hatched The Mao Tse-Tung Hour, a docudrama featuring footage in which the Ecumenical Liberation Army commits terrorist acts – recall that Patty Hearst was abducted by the Symbionese Liberation Army in 1974, an event mentioned in the film. The brass go through back channels to arrange Beale’s on-air assassination by the Ecumenicals. And that more or less ends the film.
[And now we live in a world where people get killed on social media.]
That’s not quite all there is to the film. These media transgressions and divagations play out against the backdrop of a June-October romance (she’s no starlet and he’s no rich old banker) between two executives, the young woman who helms the transformation of the news into entertainment and the middle-aged man who used to run the news.
An exercise for the reader: Take the elements from this pre-internet, pre-social media world and translate them into the current political and media environment. Interesting, sure. Prescient? Scary?
Note: This is episode 9 on Netflix, where the first two episodes are combined into a single numbered episode.
Broadcast March 9, 1993
I’ve been watching a lot of episodes from Star Trek franchises in the last few months, most of them in fact. And I’ve watched a great deal else besides. It’s my impression that, more than any other series or franchise, Star Trek presents many episodes as puzzles or games, with more or less explicit conventions or even rules. These conventions/rules always have an element of explicit logic to them.
One type of episode is one in which some of the continuing characters become pieces in a game played or manipulated by characters specific to the episode. Move Along Home is one such episode.
The episode is set on the space station, DS9, as it is visited by the Wadi, the first species from the Gamma Quadrant to visit DS9. Upon boarding DS9 the Wadi head straight for Quark’s casino, where Quark engages them in Dabo, which they master and then proceed to clean Quark out. They catch him cheating and force him to play a game they brought with them, Chula.
They refuse to tell Quark the rules. FWIW, the game appears to be played on a pyramidal set of layers and involves four game pieces which move about the layers according to Quark’s throws of dice-like counters. Once the game is under way we learn that four senior officers of DS9 have been transported into an abstract world that seems rather game-like.
Which it is. What they do in this world is motivated by/constrained by the moves Quark makes at Chula. The episode moves back and forth between Quark playing Chula and the officers (Sisko, Kira, Dax, and Bashir) in their abstract world. The four slowly figure out they’re in a game, and Quark comes to realize that the moves he makes in the game are being played out by those four officers, who have apparently left the station without a trace. And then things collapse. The game is over and the four officers appear in Quark’s. The Wadi leave and Sisko has a talk with Quark.
I have, of course, omitted all the details of gameplay etc. My purpose here is simply to register this as one example of a type of episode that occurs in Star Trek. What’s the point, if I may, of this kind of episode? What's it trying to achieve? By way of comparison, mysteries are also explicitly about puzzle solving – who done it? But they’re quite different in construction and import, no?
Addendum 6.15.19: I’m thinking I probably short-changed Quark. When he finally realizes that his game’s play may well cost the lives of the players, he begs for their lives and pledges that he’ll never again cheat if they’re spared. Odo hears this pledge and, at the end, tells Sisko to talk with Quark about what he’d said, though Odo doesn’t mention the pledge itself. We don’t hear the pledge and, of course, we know that Quark will go on cheating. There is, of course, a sense in which the Wadi are dishonest as well, for they didn’t indicate the ‘dark side’ of their game when they brought it out.
Thursday, June 13, 2019
Rafael Núñez, Michael Allen, Richard Gao, Carson Miller Rigoli, Josephine Relaford-Doyle and Arturs Semenuks, What happened to cognitive science?, Nature Human Behaviour (2019)
While I hopped on board cognitive science during my graduate years – I filed a dissertation on Cognitive Science and Literary Theory in 1978 – I can't say I find this surprising.

Abstract:
More than a half-century ago, the ‘cognitive revolution’, with the influential tenet ‘cognition is computation’, launched the investigation of the mind through a multidisciplinary endeavour called cognitive science. Despite significant diversity of views regarding its definition and intended scope, this new science, explicitly named in the singular, was meant to have a cohesive subject matter, complementary methods and integrated theories. Multiple signs, however, suggest that over time the prospect of an integrated cohesive science has not materialized. Here we investigate the status of the field in a data-informed manner, focusing on four indicators, two bibliometric and two socio-institutional. These indicators consistently show that the devised multi-disciplinary program failed to transition to a mature inter-disciplinary coherent field. Bibliometrically, the field has been largely subsumed by (cognitive) psychology, and educationally, it exhibits a striking lack of curricular consensus, raising questions about the future of the cognitive science enterprise.
From the discussion:
The cognitive science enterprise faced, from the start, substantial challenges to integration. Over the decades, things became ever more elusive. Steady challenges to the fundamental tenets of the field, failures in classical artificial intelligence handling common sense and everyday language, major difficulties in integrating cultural variation and anthropology, as well as developments in brain research, genomics and evolutionary sciences seem to have gradually turned the enthusiastic initial common effort into a rather miscellaneous collection of academic practices that no longer share common goals and paradigms. Indeed, in scientometrics, unlike successful cases of interdisciplinary integration such as biochemistry, cognitive science has been referred to as the textbook case of failed interdisciplinarity and disappearance.
This failed integration has also been aggravated by the fact that over the years the term ‘cognitive’ has become highly polysemous and theoretically loaded, even in inconsistent ways. For instance, in cognitive psychology it primarily denotes information-processing psychology, following influential work in the 1960s that saw cognitive science as essentially the marriage between psychology and artificial intelligence, in which neuroscience and the study of culture played virtually no role. Thus, cognitive psychology doesn’t just designate a subfield of psychology that studies cognition and intelligence. Rather, it usually refers to a specific theoretical approach and research program in psychology. As a consequence, research on thought, language and reasoning based on, say, the work of Jean Piaget or Lev Vygotsky—who studied the psychology of thought, reasoning, and language—is normally not considered cognitive psychology. Indeed, in recent cognitive psychology textbooks, the work of these great pioneers is not even mentioned. When attached to linguistics, ‘cognitive’ denotes an entirely different thing. ‘Cognitive linguistics’ refers to a specific field that emerged in the 1980s as an explicit alternative to Chomskian linguistics, defending the view that language is not a special-purpose module but is governed by general principles of cognition and conceptualization. Thus, the term ‘cognitive’ in cognitive linguistics designates a school in linguistics that is fundamentally opposed to—and inconsistent with—Chomskian linguistics, which, with its formal treatment of language, had appealed to the computer scientists, anti-behaviourist psychologists and analytic philosophers of the 1950s and earned it a privileged founding role in cognitive science in the first place.
Another founding role was played by psychology, which, according to previous findings and the indicators analyzed here, has become decidedly overrepresented in cognitive science. But rather than being a “conquest” of the field, there seems to be a progressive disinterest on the part of other disciplines in investigating the mind in terms of the computationalist–representationalist tenets defined by the cognitive revolution.
So what do the cognitive sciences know, really?
On the one hand, there’s nothing uniting them like evolution unites the biological sciences. So one might say that they know nothing. But a lot of work’s been done in the last half to three-quarters of a century and, really, it’s not nothing. There’s something there; in fact, quite a lot. So we have to hold two things in mind at once: we know a lot and we know nothing.
In the eternal search for understanding what makes us human, scientists found that our brains are more sensitive to pitch, the harmonic sounds we hear when listening to music, than our evolutionary relative the macaque monkey. The study, funded in part by the National Institutes of Health, highlights the promise of Sound Health, a joint project between the NIH and the John F. Kennedy Center for the Performing Arts that aims to understand the role of music in health.

"We found that a certain region of our brains has a stronger preference for sounds with pitch than macaque monkey brains," said Bevil Conway, Ph.D., investigator in the NIH's Intramural Research Program and a senior author of the study published in Nature Neuroscience. "The results raise the possibility that these sounds, which are embedded in speech and music, may have shaped the basic organization of the human brain."

The original research:
Norman-Haignere et al., fMRI Responses to Harmonic Tones and Noises Reveal Divergence in the Functional Organization of Human and Macaque Auditory Cortex. Nature Neuroscience, June 10, 2019 DOI: 10.1038/s41593-019-0410-7
Wednesday, June 12, 2019
Asteroid mining. Gene editing. Synthetic meat. We could provide for the needs of everyone, in style. It just takes some imagination. https://t.co/cjM0JKmyAC @endlesswestco @indbio #BackedBySOSV— SOSV (@SOSVvc) June 12, 2019
Tuesday, June 11, 2019
Monday, June 10, 2019
I've been reading some interesting remarks by Daniel A. Shore on why literary studies (at least in much of the Anglophone world) abandoned linguistics, though clinging to remnants of Saussure, Jakobson and the like. His remarks are an interesting complement to, counterpoint to, or, as the case may be, antidote to my own observations in Transition! The 1970s in Literary Criticism.
Consider his post, Making books, Making Language (August 3, 2018), which opens:
This post is about an asymmetry in the current disciplinary configuration of literary studies: scholars of my generation often possess detailed professional knowledge of how the books of their period of study, as physical objects, were made. Yet very few of them have an account of how the utterances in those books were made, apart from the patently inadequate notion of word choice (“words in their sites,” as Ian Hacking once put it) and, perhaps, familiarity with classical rhetorical tropes and figures. In the ’60s and ’70s linguistics provided the primary technical resource for the discipline of literary criticism; now that role is filled by descriptive bibliography.
How come current Shakespeareans study in great detail how a leather binding is stitched, embossed, and stamped, without having more than a rudimentary account of how the sentences written by Shakespeare were produced?
I think the most crucial part of the answer has to do with the current disciplinary organization of knowledge. Because there are whole departments of Linguistics devoted to studying language, with a faculty, major requirements, grad programs, and course offerings, and so on, the burden of studying and understanding language has been effectively offloaded. Book history, by contrast, has nowhere else to live, no disciplinary home of its own, outside of departments of Literature, History, and scattered programs in Media Studies, communications, etc. (That it also lives in libraries, archives, and rare book shops is another thing altogether.) Grad students who want to understand recent theories of how language works – how we produce and understand utterances we’ve never witnessed before, how the meaning of those utterances is composed of the meanings of their parts, etc. – can take courses in Linguistics… in their spare time, with spare credits, which is to say rarely if at all. What gets offloaded gets forgotten or left out.
This disciplinary division tracked onto one of the biggest intellectual divides of the second half of the 20th century. By the latter 1960s, language took on entirely opposed functions for the opposing camps. For Foucault and the humanities work he inspired, language was the “mankiller,” the premier “positivity external to Man” that constituted human being historically, producing it as a contingent “figure,” and the goal of studying language was to detranscendentalize the claims of the human sciences. Around the same time, by contrast, the study of language in the Chomskyan paradigm became the most prestigious domain for the production of truths about universal human nature. It purported to establish the biological transcendentals and species-specific endowment of human beings abstracted away from cultural and historical difference. Once it was clear, circa 1980, that literary studies would embrace the detranscendentalizing project, claims about language beyond the cultural specificity and contingency of the lexicon became suspect.
Language itself became a divided terrain: to the linguists went grammar, generality, and universality; to humanists went words, arbitrariness, and cultural specificity. Book history lent itself rather easily to the historicizing project, or at least travelled alongside it without conflict, since the material supports for communication, publication, and distribution vary quite dramatically across periods and cultures; better still, book history compensated for the tendency to idealism of various linguistic, social, and cultural constructivisms precisely because of its hard-nosed and workmanlike empiricism (see “the new boredom”). In this sense, book history didn’t come “after theory” at all but was theory’s easy companion and (depending on what one supposes is the current status of “theory”) survivor.
Is speech rhythmic? Definitely not perfectly isochronous, but still sound quite temporally regular to me. Check out this recording (and also other clips of this drummer). https://t.co/J9m1jyNmqY— Andrew Chang (@candrew123) June 8, 2019
Big (internet) tech is under attack. Alexis Madrigal lists the players in The Atlantic, The Coalition Out to Kill Tech as We Know It. He observes:
At a broad ideological level, two things have happened. First, the idea of cyberspace, a transnational, individualistic, largely unregulated, and free place that was not exactly located in any governmental domain, has completely collapsed. Second, the mythology of tech as the carrier of progress has imploded, just as it did for the robber barons of the late 19th century, ushering in the trust-busting era. While Big Tech companies try to establish a new reason for their privileged treatment and existence (hint: screaming “CHINA!”), they are vulnerable to attacks on their business practices that suddenly make sense.

But these changes did not occur in the ether among particles of discourse. Over the past three years, an ecosystem of tech opponents has emerged and gained strength. Here’s a catalog of the coalition that has pulled tech from the South Lawn into the trenches.

He organizes the coalition under these headings:
- Angry Conservatives
- Disillusioned Liberal Tech Luminaries
- Antitrust Theoreticians
- Democratic Presidential Candidates
- Rank-and-File Tech Workers
- Traditional Democratic Corporate Reformers
- Privacy Advocates
- European Regulators
- The Media Industry
- The Telecom Industry
- Scholarly Tech Critics
- Oracle and Other Business-Software Companies
- Yelp and Other Consumer-Protection Organizations
- The Chinese Internet Industry
Sunday, June 9, 2019
Another classic text from 1957, Frank Rosenblatt's tech report introducing the idea of perceptrons [#DH]
Here's a link to a downloadable PDF of the report: The Perceptron — A Perceiving and Recognizing Automaton. That introduced the idea of artificial neural networks, which has revolutionized AI and CL in the last quarter century or so. That's from this page, which has several other and more recent links.
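For readers who have never looked under the hood, the learning rule at the heart of Rosenblatt's report is remarkably simple, which is part of why it was so influential. Here's a minimal sketch in Python; the toy AND data and variable names are my own illustration, not drawn from the 1957 report itself:

```python
# Minimal sketch of the perceptron learning rule (toy illustration; the
# AND data and names below are mine, not from Rosenblatt's report).

def train_perceptron(samples, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches each label."""
    n = len(samples[0][0])
    w, b = [0] * n, 0
    for _ in range(epochs):
        for x, target in samples:                      # target is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != target:                         # update only on mistakes
                w = [wi + target * xi for wi, xi in zip(w, x)]
                b += target
    return w, b

# Linearly separable toy problem: logical AND, labels in {-1, +1}
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
               for x, _ in data]
print(w, b, predictions)  # the learned line separates (1,1) from the rest
```

The mistake-driven update is the whole trick: weights move toward misclassified positive examples and away from misclassified negative ones, and on linearly separable data this is guaranteed to converge. Deep learning stacks many such units and replaces the rule with gradient descent, but the lineage is direct.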
In my chronology of cog. sci. and literary theory (PDF here) I list Chomsky's Syntactic Structures, Roland Barthes' Mythologies, and Frye's Anatomy of Criticism from 1957 as well. I note as well that Norbert Wiener's Cybernetics was published almost a decade earlier, in 1948. Warren Weaver's memo, "Translation", was published in 1949; it introduced the idea of machine translation.
Consider the intellectual history crudely indicated here:
Jonathan Birch, Are kin and group selection rivals or friends? Current Biology, Volume 29, ISSUE 11, PR433-R438, June 03, 2019, https://doi.org/10.1016/j.cub.2019.01.065.
Kin selection and group selection were once seen as competing explanatory hypotheses but now tend to be seen as equivalent ways of describing the same basic idea. Yet this ‘equivalence thesis’ seems not to have brought proponents of kin selection and group selection any closer together. This may be because the equivalence thesis merely shows the equivalence of two statistical formalisms without saying anything about causality. W.D. Hamilton was the first to derive an equivalence result of this type. Yet Hamilton was aware of its limitations, and saw that, while illuminating, it papered over some biologically important distinctions. Attending to these distinctions leads to the concept of ‘K-G space’, which helps us see where the biological disagreements between proponents of kin selection and group selection really lie.
Concepts from network theory, such as the ‘clustering coefficient’ and the ‘relative density’, can help us quantify, at any particular moment, the ‘groupiness’ of a social network — the extent to which it contains real, non-arbitrary social groups at that time. If we choose an appropriate measure and take a time-average for the whole population over one generation, we have a rough measure of the extent to which groups are ‘clearly in evidence’. We can define a quantity G that takes the value 1 when social groups are fully discrete and isolated from each other for long periods (as in the Haystacks model) and the value 0 when there is no population structure at all, with more realistic cases in between.
Meanwhile, the extent to which genetic correlation is explained by kinship can be quantified by comparing the locus-specific correlation with respect to the gene of interest to the average correlation across the entire genome, since only kinship can generate correlation at every locus. We can define a quantity K that takes the value 1 when all the correlation is whole-genome correlation and 0 when all the correlation is specific to the locus in question.
These variables lead naturally to the representational device of ‘K-G space’. A population’s place in K-G space depends on the extent to which real groups are clearly in evidence and the extent to which genetic correlation is explained by kinship. As Hamilton himself said in other words, selection in high K, low G populations seems aptly described as ‘kin selection’, whereas selection in high G, low K populations seems aptly described as ‘group selection’. In the high K, high G region we have hybrid cases that are aptly described as ‘kin-group selection’, because assortment is kin-based and groups are clearly in evidence. In these cases, there really is no meaningful debate to be had about which process is at work.
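The regions of K-G space that Birch describes can be captured in a few lines. Here's a toy sketch of my own (the threshold and function names are illustrative, not Birch's formalism) that takes a population's K and G values and names the selection regime:

```python
# Toy illustration of locating a population in 'K-G space'.
# (The 0.5 threshold and function name are my own assumptions,
# not part of Birch's formalism.)

def kg_regime(k, g, threshold=0.5):
    """Classify a population by its K and G values.

    k: fraction of genetic correlation explained by whole-genome kinship (0-1).
    g: degree to which discrete social groups are 'clearly in evidence' (0-1).
    """
    assert 0.0 <= k <= 1.0 and 0.0 <= g <= 1.0
    high_k, high_g = k >= threshold, g >= threshold
    if high_k and high_g:
        return "kin-group selection"   # hybrid: kin-based assortment, real groups
    if high_k:
        return "kin selection"         # high K, low G
    if high_g:
        return "group selection"       # high G, low K (e.g. the Haystacks model)
    return "neither clearly applies"

# Example placements in the space:
print(kg_regime(0.9, 0.1))  # kin selection
print(kg_regime(0.1, 0.9))  # group selection
print(kg_regime(0.9, 0.9))  # kin-group selection
```

Of course the real space is continuous, and Birch's point is precisely that debates concern where a population sits in it, not which discrete label applies; the sharp threshold here is only a device for making the four regions visible.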
A bit further on:
What is the use of K-G space? It is an unorthodox way of thinking about the relation between kin and group selection, so there had better be some payoff for adopting this unorthodox way of thinking. Otherwise it will just lead to confusion.
The payoff, in my view, is that this representational tool helps us see what is really at stake when proponents of kin selection and group selection debate particular cases, such as the origins of eusociality or the evolution of human cooperation. These are not just non-empirical debates about which mathematical formalism we should use to describe the process. But nor are they black-and-white clashes between vastly different alternatives. They tend to be debates about where the population of interest should be located in K-G space.
Friday, June 7, 2019
I'm bumping this to the top of the queue in response to remarks by Ted Underwood on Twitter and Willard McCarty in the Humanist Discussion Group.
* * * * *
Mark Liberman at Language Log posted a link to an excellent article by Michael Jordan, "Artificial Intelligence — The Revolution Hasn’t Happened Yet", Medium 4/19/2018. Here are some passages.
Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.
While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.
He goes on to observe that the issues involved are too often discussed under the rubric of "AI", which has meant various things at various times. The phrase was coined in the 1950s to denote the creation of computing technology possessing a human-like mind. Jordan calls this "human-imitative AI" and notes that it was largely an academic enterprise whose objective, the creation of "high-level reasoning and thought", remains elusive. In contrast:
The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions.
This work is often packaged as machine learning (ML).
Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. [...] Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon.
One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.
He then goes on to coin two more terms, "Intelligence Augmentation" (IA) and "Intelligent Infrastructure" (II). In the first
...computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists.
The second involves
...a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies.
And now we get to his central question:
Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering.
Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems.
And he goes on to explore that theme.
...the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.
Coming to the end he makes an interesting historical observation:
It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.)
We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II.
This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives.
His concluding paragraphs:
Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.
In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.
I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead.
Computation isn't a real theoretical paradigm, but it's probably a convenient shibboleth for one that usually goes un- or poorly articulated.— Scott B. Weingart 🤹 (@scott_bot) June 7, 2019
Joseph Risi, Amit Sharma, Rohan Shah, Matthew Connelly & Duncan J. Watts, Predicting history, Nature Human Behaviour (3 June 2019)
Abstract:

Can events be accurately described as historic at the time they are happening? Claims of this sort are in effect predictions about the evaluations of future historians; that is, that they will regard the events in question as significant. Here we provide empirical evidence in support of earlier philosophical arguments that such claims are likely to be spurious and that, conversely, many events that will one day be viewed as historic attract little attention at the time. We introduce a conceptual and methodological framework for applying machine learning prediction models to large corpora of digitized historical archives. We find that although such models can correctly identify some historically important documents, they tend to overpredict historical significance while also failing to identify many documents that will later be deemed important, where both types of error increase monotonically with the number of documents under consideration. On balance, we conclude that historical significance is extremely difficult to predict, consistent with other recent work on intrinsic limits to predictability in complex social systems. However, the results also indicate the feasibility of developing ‘artificial archivists’ to identify potentially historic documents in very large digital corpora.

From the discussion:

Therefore, on balance, our results suggest that Danto was substantively correct. As the number of events being evaluated grows, successful predictions will be increasingly outnumbered by events that seem insignificant at the time, but which come to be viewed as important by future historians in part because of events that have not yet taken place. More generally, our results provide further evidence for the observation that the combination of nonlinearity, stochasticity and competition for scarce attention that is inherent to human systems poses serious difficulties for ex ante predictions—a pattern that has previously been noted in outcomes such as political events, success in cultural markets, the scientific impact of publications and the diffusion of information in social networks. Given that historical significance is typically evaluated on longer time scales than these other examples, it is especially vulnerable to unintended consequences, sensitivity to small fluctuations and reinterpretation of previous information in light of new discoveries or societal concerns. A further complication is that historical significance, even when it can be meaningfully assigned, is specific to observers whose evaluation may depend on their own idiosyncratic interests and priorities. Although we speak of history as a single entity, in reality there may be many histories, within each of which the same set of events may be recalled and evaluated differently.

Addendum (8 June 2019): Compare the above with a passage from T. S. Eliot, "Tradition and the Individual Talent", 1920:
No poet, no artist of any art, has his complete meaning alone. His significance, his appreciation is the appreciation of his relation to the dead poets and artists. You cannot value him alone; you must set him, for contrast and comparison, among the dead. I mean this as a principle of æsthetic, not merely historical, criticism. The necessity that he shall conform, that he shall cohere, is not one-sided; what happens when a new work of art is created is something that happens simultaneously to all the works of art which preceded it. The existing monuments form an ideal order among themselves, which is modified by the introduction of the new (the really new) work of art among them. The existing order is complete before the new work arrives; for order to persist after the supervention of novelty, the whole existing order must be, if ever so slightly, altered; and so the relations, proportions, values of each work of art toward the whole are readjusted; and this is conformity between the old and the new. Whoever has approved this idea of order, of the form of European, of English literature, will not find it preposterous that the past should be altered by the present as much as the present is directed by the past. And the poet who is aware of this will be aware of great difficulties and responsibilities.
Thursday, June 6, 2019
At what level does a uniform circuitry really imply a uniform function? Maybe it's time to rethink the "the cerebellum is an internal model" doctrine. Our review in @NeuroCellPress is out today. @Brains_CAN. https://t.co/g2QD7B780a— DiedrichsenLab (@diedrichsenlab) June 6, 2019
The concluding discussion:
We have reviewed convergent evidence that highlights the functional diversity of the human cerebellum. This diversity makes the formulation of a domain-general theory of cerebellar function, at best, very challenging. It is also important to keep in mind that an algorithmic account of cerebellar function may entail multiple computational concepts and that these may differ across domains, an idea we termed multiple functionality.
The relative merits of the universal transform and multiple functionality hypotheses will, in the end, be an empirical question. For now, we think there is considerable value in carefully developing hypotheses of cerebellar function for specific cognitive domains without being limited, a priori, by the assumption that the function is somehow analogous to those established for motor control. For example, most of our hypotheses and experiments in the sensorimotor domain focus on the role of the cerebellar circuit in the adult organism. However, in the cognitive domain, the cerebellum may play a more important role in development than in mature function (Badura et al., 2018). Furthermore, although damage to the cerebellum in adulthood frequently results in rather subtle symptoms on cognitive and affective measures (Alexander et al., 2012), the same damage in the developing brain may have much more profound consequences. Thus, in cognitive and social domains, the cerebellum may help set up cortical circuitry during certain sensitive phases of development. When established, the cortical circuits may no longer require substantial cerebellum-based modulation. This hypothesis may be important for understanding why cerebellar dysfunction has been attributed to neuropsychiatric developmental disorders such as autism (Wang et al., 2014) and schizophrenia (Moberget et al., 2018), even though damage to the cerebellum in adulthood will not result in the symptoms associated with these disorders.
When exploring cerebellar function in each task domain, there are two critical issues that must be addressed. First, cerebellar activity should be studied in the context of the activity patterns in the cerebral cortex. In isolation, the study of cerebellar activity may lead to interesting punctuated insights, such as “the cerebellum represents reward” (Wagner et al., 2017). However, to gain a deeper understanding of cerebellar function, we need to compare cerebellar and cortical representations (Wagner et al., 2019). Do the pontine nuclei simply transmit information from the neocortex to the cerebellum in a non-selective manner, or are specific aspects of cortical representations emphasized and other aspects omitted? The circuitry in the pontine nuclei suggests that these subcortical nuclei can perform non-linear integration and gating of cortical input (Schwarz and Thier, 1999). Thus, the information reaching the cerebellum may differ in informative ways from the way it is represented in the neocortex.
Identifying these differences is likely to yield important insight into the role of the cerebellum. For example, if a cerebellar area is especially important in a specific phase of skill learning, then we would expect different activity time courses for the relevant cerebellar and cortical regions: disproportionately higher activity in early phases of learning when the cerebellum is involved in initial acquisition and disproportionately higher activity in later phases when it is important for the performance of automatized behaviors. To perform such experiments and analyses, a full model of cortical-cerebellar connectivity is required, allowing the researcher to identify the relevant pairs of cortical and cerebellar regions.
Second, it will be important to understand what information is carried by the climbing fiber system. According to the Marr-Albus-Ito model, the climbing fiber input specifies the “learning goal” for the cerebellar circuit and, therefore, plays a pivotal role in shaping the output of the cerebellum. Although the climbing fiber input has traditionally been assumed to represent an error signal, new evidence suggests that it may be better conceptualized as a general teaching signal that may sometimes also relate to reward rather than error (Heffley et al., 2018). At present, we have virtually no insight concerning the information content of the climbing fiber system in the “cognitive” regions of the human cerebellum. Thus, we do not know what these cerebellar circuits are being instructed to learn. Understanding the learning goal (or cost function) will likely provide an important key to understanding cerebellar function in the domain of cognition.
In summary, careful investigation of cerebellar function within well-specified task domains will provide a clearer picture of the functional diversity of this major subcortical structure. Looking across domains, we may ultimately discover a universal cerebellar transform. It is likely, however, that this computation will not be easily captured in the functional terms we can intuitively describe: ideas such as timing, automatization, prediction, error correction, or internal models. Rather, a common principle may only emerge in terms of a more abstract language describing the population dynamics of neuronal networks.
Where do the predictive techniques of machine learning come from? How do they represent and organise the social? Join us for a public talk with Dominique Cardon (@Karmacoma), Wed 24th April 5pm: https://t.co/oBplRQvRgZ— Digital Humanities (@kingsdh) April 19, 2019
If you follow the link you'll see this abstract:
Neurons spike back. The invention of inductive machines and the artificial intelligence controversy - Dominique Cardon (Sciences Po Médialab)
Since 2010, machine-learning-based predictive techniques, and more specifically deep-learning neural networks, have achieved spectacular performance in fields such as image recognition and automatic translation, under the umbrella term of "Artificial Intelligence". But their relation to this field of research is not straightforward. In the tumultuous history of AI, learning techniques using so-called "connectionist" neural networks have long been mocked and ostracized by the "symbolic" movement. This talk retraces the history of artificial intelligence through the lens of the tension between symbolic and connectionist approaches. From the perspective of a social history of science and technology, it seeks to highlight how researchers, relying on the availability of massive data and the multiplication of computing power, have undertaken to reformulate the symbolic AI project by reviving the spirit of the adaptive and inductive machines of the cybernetics era.
The hypothesis behind this talk is that the new computational techniques used in machine learning provide a new way of representing society, based no longer on categories but on individual traces of behaviour. The new algorithms of machine learning replace the regularity of constant causes with the "probability of causes". What is emerging, then, is another way of representing society and the uncertainties of action. To defend this argument, the talk will present two parallel investigations. The first, from the perspective of the history of science and technology, traces the emergence of the connectionist paradigm within artificial intelligence techniques. The second, based on the sociology of statistical categorization, focuses on how the calculation techniques used by major web services produce predictive recommendations.
This talk will be partly based on the article (in French): Cardon (Dominique), Cointet (Jean-Philippe), Mazières (Antoine), «La revanche des neurones. L’invention des machines inductives et la controverse de l’intelligence artificielle», Réseaux, n°211, 2018, pp. 173-220.
Wednesday, June 5, 2019
Dean Spears and Amit Thorat, "The Puzzle of Open Defecation in Rural India: Evidence from a Novel Measure of Caste Attitudes in a Nationally Representative Survey," Economic Development and Cultural Change, forthcoming. https://doi.org/10.1086/698852
Abstract (h/t Tyler Cowen):
Uniquely widespread and persistent open defecation in rural India has emerged as an important policy challenge and puzzle about behavioral choice in economic development. One candidate explanation is the culture of purity and pollution that reinforces and has its origins in the caste system. Although such a cultural account is inherently difficult to quantitatively test, we provide support for this explanation by comparing open defecation rates across places in India where untouchability is more and less intensely practiced. In particular, we exploit a novel question in the 2012 India Human Development Survey that asked households whether they practice untouchability, meaning whether they enforce norms of purity and pollution in their interactions with lower castes. We find an association between local practice of untouchability and open defecation that is robust; is not explained by economic, educational, or other observable differences; and is specific to open defecation rather than other health behavior or human capital investments more generally. We verify that practicing untouchability is not associated with general disadvantage in health knowledge or access to medical professionals. We interpret this as evidence that the culture of purity, pollution, untouchability, and caste contributes to the exceptional prevalence of open defecation in rural India.
Olivier Morin, Alberto Acerbi, Oleg Sobchuk, Why people die in novels: testing the ordeal simulation hypothesis, Palgrave Communications 5, Article number: 2 (2019)
Comment: FWIW, for various reasons, color me skeptical. For one thing, the adaptive hypothesis implies that death shows up in fiction because it is something we all must confront, but that we can't prepare for by rehearsal. But is homicide more prevalent in 20th C. American novels because Americans want to prepare themselves for the threat of murder? Seems unlikely to me. I rather suspect that murder shows up because of the moral and psychological issues it raises about the murderer. Fans of The Sopranos, for example, weren't preparing for the possibility of being murdered by a mob boss. They were interested in how a mob boss thinks and feels about the murders he orders and the ones he commits.

Abstract:
What is fiction about, and what is it good for? An influential family of theories sees fiction as rooted in adaptive simulation mechanisms. In this view, our propensity to create and enjoy narrative fictions was selected and maintained due to the training that we get from mentally simulating situations relevant to our survival and reproduction. We put forward and test a precise version of this claim, the “ordeal simulation hypothesis”. It states that fictional narrative primarily simulates “ordeals”: situations where a person’s reaction might dramatically improve or decrease her fitness, such as deadly aggressions, or decisions on long-term matrimonial commitments. Experience does not prepare us well for these rare, high-stakes occasions, in contrast with situations that are just as fitness-relevant but more frequent (e.g., exposure to pathogens). We study mortality in fictional and non-fictional texts as a partial test for this view. Based on an analysis of 744 extensive summaries of twentieth century American novels of various genres, we show that the odds of dying (in a given year) are vastly exaggerated in fiction compared to reality, but specifically more exaggerated for homicides as compared to suicides, accidents, war-related, or natural deaths. This evidence supports the ordeal simulation hypothesis but is also compatible with other accounts. For a more specific test, we look for indications that this focus on death, and in particular on death caused by an agent, is specific to narrative fiction as distinct from other verbal productions. In a comparison of 10,810 private letters and personal diary entries written by American women, with a set of 811 novels (also written by American women), we measure the occurrence of words related to natural death or agentive death. Private letters and diaries are as likely, or more likely, to use words relating to natural or agentive death. 
Novels written for an adult audience contain more words relating to natural deaths than do letters (though not diary entries), but this is not true for agentive death. Violent death, in spite of its clear appeal for fiction, does not necessarily provide a clear demarcation point between fictional and non-fictional content.
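The core measurement in that second comparison is simple enough to sketch in a few lines of code. This is my own toy illustration of the kind of word-rate comparison the abstract describes, not the authors' method: the word lists and example texts below are hypothetical stand-ins, whereas the actual study used curated lexicons applied to 10,810 letters and diary entries and 811 novels.

```python
# Toy sketch: compare how often "agentive-death" vs "natural-death" words
# occur in two sets of texts, reported per 1,000 tokens.
import re
from collections import Counter

# Hypothetical word lists, for illustration only.
AGENTIVE = {"murder", "murdered", "killed", "shot", "stabbed"}
NATURAL = {"died", "illness", "cancer", "deathbed", "funeral"}

def death_word_rate(texts, vocab):
    """Occurrences of vocab words per 1,000 tokens, pooled over texts."""
    hits = total = 0
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        total += len(tokens)
        counts = Counter(tokens)
        hits += sum(counts[w] for w in vocab)
    return 1000 * hits / total if total else 0.0

# Hypothetical miniature "corpora".
novels = ["He was shot and killed before the funeral."]
letters = ["Grandmother died of illness last spring."]

# Rates per 1,000 tokens for each corpus and each word class.
print(death_word_rate(novels, AGENTIVE), death_word_rate(novels, NATURAL))
print(death_word_rate(letters, AGENTIVE), death_word_rate(letters, NATURAL))
```

With real corpora, the finding reported above amounts to the claim that the agentive-death rate is not reliably higher in the novels column than in the letters and diaries column, even though the natural-death rate sometimes is.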