Tuesday, December 6, 2016

What you see is what you get

The graphical computer interface that became common with Apple's Macintosh computer is sometimes known as WYSIWYG: "what you see is what you get." The following chart shows an Ngram query on both "WYSIWYG" and "what you see is what" (Ngram won't handle phrases of more than five words).


You can see "WYSIWYG" on the rise starting in the mid-1980s, which coincides with the Macintosh, released in 1984. But "what you see is what you get" extends back into the 1970s.

What's going on?

In the fall of 1970 Flip Wilson debuted a comedy show on NBC. He was perhaps best known for playing a character named Geraldine, a sassy, brassy black woman. Geraldine had a number of catchphrases, including "The devil made me do it" and "What you see is what you get." WYSIWYG!

Did the computer industry get the phrase from Flip Wilson, or was it independently invented? Inquiring minds want to know.
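Anyone curious can reproduce this comparison. Here's a minimal sketch of building the Ngram Viewer query URL for those two phrases; the parameter names mirror the viewer's visible query string and should be treated as assumptions rather than a documented API:

```python
from urllib.parse import urlencode

def ngram_viewer_url(phrases, year_start=1950, year_end=2008, smoothing=3):
    """Build a Google Books Ngram Viewer URL comparing several phrases.
    Each phrase may be at most five words long (the viewer's limit)."""
    params = {
        "content": ",".join(phrases),  # comma-separated list of phrases
        "year_start": year_start,
        "year_end": year_end,
        "smoothing": smoothing,
    }
    return "https://books.google.com/ngrams/graph?" + urlencode(params)

url = ngram_viewer_url(["WYSIWYG", "what you see is what"])
```

Opening that URL in a browser should produce a chart like the one above.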

Monday, December 5, 2016

Looking up the Hudson River



Beyond Broca and Wernicke

Abstract: Broca and Wernicke are dead, or moving past the classic model of language neurobiology

With the advancement of cognitive neuroscience and neuropsychological research, the field of language neurobiology is at a cross-roads with respect to its framing theories. The central thesis of this article is that the major historical framing model, the Classic “Wernicke-Lichtheim-Geschwind” model, and associated terminology, is no longer adequate for contemporary investigations into the neurobiology of language. We argue that the Classic model (1) is based on an outdated brain anatomy; (2) does not adequately represent the distributed connectivity relevant for language; (3) offers a modular and “language centric” perspective; and (4) focuses on cortical structures, for the most part leaving out subcortical regions and relevant connections. To make our case, we discuss the issue of anatomical specificity with a focus on the contemporary usage of the terms “Broca’s and Wernicke’s area”, including results of a survey that was conducted within the language neurobiology community. We demonstrate that there is no consistent anatomical definition of “Broca’s and Wernicke’s Areas”, and propose to replace these terms with more precise anatomical definitions. We illustrate the distributed nature of the language connectome, which extends far beyond the single-pathway notion of arcuate fasciculus connectivity established in Geschwind’s version of the Classic Model. By illustrating the definitional confusion surrounding “Broca’s and Wernicke’s areas”, and by illustrating the difficulty integrating the emerging literature on perisylvian white matter connectivity into this model, we hope to expose the limits of the model, argue for its obsolescence, and suggest a path forward in defining a replacement.

“Broca and Wernicke are dead, or moving past the classic model of language neurobiology” by Pascale Tremblay and Anthony Steven Dick in Brain and Language. Published online August 30, 2016. doi:10.1016/j.bandl.2016.08.004

You can find the paper here (behind a paywall):

There's a news article here:

From that article: Tremblay and Dick call for a “clean break” from the Classic Model and a new approach that rejects the “language centric” perspective of the past (that saw the language system as highly specialised and clearly defined), and that embraces a more distributed perspective that recognises how much of language function is overlaid on cognitive systems that originally evolved for other purposes.

Sunday, December 4, 2016

Neurolinguistics of Language

Here are links to a bunch of recent articles in the Journal of Neurolinguistics (H/t Dan Everett). Note that all are behind a paywall, but you can at least link through to see some abstracts. I've listed two articles (plus abstracts) that caught my attention on a quick look:

* * * * *
Volume 42, May 2017, Pages 49–62
A re-visit of three-stage humor processing with readers' surprise, comprehension, and funniness ratings: An ERP study


The roles of surprise, comprehension, and amusement levels in humor processing were examined.
Participants were divided into high/low score groups based on their behavioral ratings to verbal jokes.
Highly surprised, comprehended, and amused group elicited larger N400, P600, and LPP effects, respectively.
These intergroup variances supported the three-stage model of humor processing.


Humor processing can be divided into three sub-stages including incongruity detection, incongruity resolution, and elaboration (23 and 10). However, few studies have investigated the three-stage model of humor processing with readers' surprise, comprehensibility and funniness levels, and little discussion has been devoted to its biological underpinning. To verify the credibility of the three-stage model, electroencephalography (EEG) was utilized in corroboration with two types of stimuli including jokes and non-jokes in the present research. Participants were categorized into high vs. low score groups based on their rating scores of surprise, comprehension, and funniness to joke stimuli. The between-group analyses showed that compared with the less surprised group, highly surprised people elicited a primarily larger N400, which may suggest more incongruity perceived in reading jokes. Additionally, good comprehenders mainly elicited a larger P600, probably indicating a more successful resolution of detected incongruity in comparison with poor comprehenders. Finally, the highly amused group elicited a larger late positive potential (LPP) compared with the less amused group, which could reflect more affective elaboration of jokes. Participants' surprise, comprehension, and funniness levels had smaller impacts on other chief electrophysiological components, with the effects varying with different group contrasts. These results provided the evidence that different degrees of surprise, comprehensibility, and amusement to jokes would influence the three sub-stages (incongruity detection, incongruity resolution, and elaboration) respectively in humor processing. The current study thus generally re-verified the stability of the three-stage model through participants' behavioral ratings which had seldom been touched upon.

* * * * *
Volume 40, November 2016, Pages 112–127
Individual differences in the bilingual brain: The role of language background and DRD2 genotype in verbal and non-verbal cognitive control


Bilingual language control is associated with activity in the inferior frontal gyrus.
Non-verbal control is associated with activity in the anterior cingulate cortex.
Specific genotypes predict fMRI activity during language control and task switching.
Bilingual experience predicts fMRI activity during language control and inhibition.


Bilingual language control may involve cognitive control, including inhibition and switching. These types of control have been previously associated with neural activity in the inferior frontal gyrus (IFG) and the anterior cingulate cortex (ACC). In previous studies, the DRD2 gene, related to dopamine availability in the striatum, has been found to play a role in neural activity during cognitive control tasks, with carriers of the gene’s A1 allele showing different patterns of activity in inferior frontal regions during cognitive control tasks than non-carriers. The current study sought to extend these findings to the domain of bilingual language control. Forty-nine Spanish-English bilinguals participated in this study by providing DNA samples through saliva, completing background questionnaires, and performing a language production task (picture-naming), a non-verbal inhibition task (Simon task), and a non-verbal switching task (shape-color task) in the fMRI scanner. The fMRI data were analyzed to determine whether variation in the genetic background or bilingual language background predicts neural activity in the IFG and ACC during these three tasks. Results indicate that genetic and language background variables predicted neural activity in the IFG during English picture naming. Variation in only the genetic background predicted neural activity in the ACC during the shape-color switching task; variation in only the language background predicted neural activity in the ACC and IFG during the Simon task. These results suggest that variation in the DRD2 gene should not be ignored when drawing conclusions about bilingual verbal and non-verbal cognitive control.

Another RGB Series

Here is the original scene:


It is an object of no particular interest, a plastic milk bottle with part of its top cut off. It's on a window sill; you can see its reflection in the window, and you can see the screen in the window as well. It is what it is.

And then we have three versions of that scene, in red, green, and blue tones, respectively:




Why those three colors? Because those are the primary colors of video and computer monitors (RGB). I've done many such RGB series. And some of them extend beyond the basic RGB triplet.

Why do I do it? It's fun. It's an exercise. It's basic.
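The idea behind such a series can be sketched in a few lines of code. This is just one way to make monochrome-toned versions of an image (convert each pixel to grayscale, then place that value into a single RGB channel); the actual photos were made in an image editor, and this sketch is not the artist's workflow:

```python
def monochrome_series(pixels):
    """Given a list of (R, G, B) pixels, return red-, green-, and blue-toned
    versions: each pixel is reduced to a grayscale value (ITU-R 601 luma
    weights) and that value is placed in one channel at a time."""
    def gray(p):
        r, g, b = p
        return round(0.299 * r + 0.587 * g + 0.114 * b)

    gs = [gray(p) for p in pixels]
    red = [(v, 0, 0) for v in gs]
    green = [(0, v, 0) for v in gs]
    blue = [(0, 0, v) for v in gs]
    return red, green, blue

# Demo on three sample pixels: white, black, and an orange-ish tone
pixels = [(255, 255, 255), (0, 0, 0), (200, 100, 50)]
red, green, blue = monochrome_series(pixels)
```

Applied to every pixel of the original scene, this yields the three toned versions above.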

20161203-_IGP8323RGB Eq LoSat

Friday, December 2, 2016

Friday Fotos: Fr8s


BM 33 03.jpg




An Executive Guide to the Computer Age

William Benzon, "An Executive Guide to the Computer Age." Raymond T. Yeh and Paul B. Schneck, Co-Directors, Computer Science: Key to a Space Program Renaissance: Final Report of the 1981 NASA/ASEE Summer Study on the Use of Computer Science and Technology in NASA. University of Maryland, Computer Science Technical Report Series, No. 1168, Vol. II: F-1 - F-14, January 1982.

Abstract: Computing technologies have the potential to radically transform the way we live. Such a transformation is not inevitable, nor is it necessarily good. The purpose of this paper is to place this possibility into its proper historical perspective and to consider, in a general way, how one plans for it. The first section considers the place of the "Information Age" in history, suggesting that it is the fourth major transformation in human cultural evolution. The next section develops a five-dimensional metaphor outlining certain basic factors which may be considered in a strategic plan for the use of computing technology. The final section discusses a specific aspect of that planning – the relationship between computing and productivity – and suggests that the transforming power of computing technology lies in the possibility of dramatically increasing productivity as intelligent computing becomes routine and reliable.

Wednesday, November 30, 2016

For Trump's supporters, corruption is part of the deal

But Jan-Werner Müller, a Princeton political scientist who recently published an excellent little book about authoritarian populist movements, finds that Trump supporters’ indifference to Trump’s corrupt leanings is actually rather typical. Even when clear evidence of corruption emerges once an authoritarian populist regime is in place, the regime’s key supporters are generally unimpressed.

“The perception among supporters of populists is that corruption and cronyism are not genuine problems as long as they look like measures pursued for the sake of a moral, hardworking ‘us’ and not for the immoral or even foreign ‘them,’” he writes, “hence it is a pious hope for liberals to think that all they have to do is expose corruption to discredit populists.”

George Mason University’s Justin Gest is the author of a recent study of white working-class politics in the United States and United Kingdom, and one of his major themes is that there is a pervasive cynicism about politics and government among the people he interviews.

“Today’s working class, Rust Belt voters are disenchanted by what they perceive to be a political and economic culture of exploitative greed and gridlock,” he writes, “and are waiting for someone to adopt their cause.”

Per Müller, their enthusiasm for Trump doesn’t necessarily reflect a misperception that he is honest or that he will eschew greed and corruption. Rather, their view is that he is on their side and that the protestations of his opponents merely reflect the self-interested defensiveness of the establishment. Highlighting themes of racial and ethnic conflict as central to American politics further feeds this dynamic. Trump may be a sonofabitch, the thinking goes, but at least he’s our sonofabitch.
Ignore the clown, focus on policy:
A November 22 Quinnipiac poll revealed both the risks and the opportunities currently facing Democrats. It showed that attacks on Trump’s character have set in, and most people agree that Trump is not honest and not levelheaded. But it also showed that a majority believe he will create jobs, that he cares about average Americans, and that he will bring change in the right direction. Yet at the same time, Quinnipiac also finds that most voters favor legal abortion, oppose tax cuts for the wealthy, oppose deregulation of business, and oppose weakening gun control regulation.

Which is to say that the most normal, blandly partisan parts of Trump’s agenda are also among the least popular. And yet Trump’s support for them is what immunizes him from Republican criticism and oversight over the abnormal stuff. Defending the basic norms of American constitutional government is important, but doing it as a partisan agenda won’t work — it turns off Trump’s core supporters and signals to wavering ones that his opponents are focused on abstractions rather than daily life. As long as Trump is enjoying the lockstep support of congressional Republicans, his opponents need to find ways to turn attention away from the Trump Show and focus it on his basic policy agenda and the ways in which it touches millions of people.

Tuesday, November 29, 2016

Documental "Lucumi, el Rumbero de Cuba" (Rumbero of Cuba)

This is an excellent little film (26 minutes). There's some delightful dancing and drumming in the last third. Note that the white dress is ceremonial.

From the description at YouTube:
Lucumi is ten and lives in Havana's black district. Brought up to the beat of drums, he dreams of becoming a great rumbero. With other kids on his block he improvises rumbas on old cans and pots and pans. One Saturday the best of Havana's musicians decide to get together at the "Solar California" to honor the memory of Chano Pozo, otherwise known as "the drum of Cuba". With the rumba beat, Lucumi sings, dances, plays and talks about his life, as if better to express the hardships he's already endured and to have his message heard. On this Saturday he joins up with the great rumberos and wakes up the old spirits of the tumbadora.

* * * * *

Tony Gatlif brings us epic scenes as young rumbero Michael Herrera Duarte (Lucumi) stars in this film with Cuban legends Tata Guines & Pancho Quinto. Beautiful cinematography coupled with great drumming and dancing, you'll love watching this one!

Self-portrait in green & shadow


Quick takes: detect animate vs. inanimate in 250 msec

Back in the 1970s & 1980s David Hays and I hypothesized the existence of perceptual mechanisms that would support a quick determination of whether or not something was alive. We figured such perception would have survival value, since knowing whether you're facing an animate being could mean life or death in the wild. Well, now we know:

In PsyPost:
UC Berkeley scientists have discovered a visual mechanism they call “ensemble lifelikeness perception,” which determines how we perceive groups of objects and people in real and virtual or artificial worlds.

“This unique visual mechanism allows us to perceive what’s really alive and what’s simulated in just 250 milliseconds,” said study lead author Allison Yamanashi Leib, a postdoctoral scholar in psychology at UC Berkeley. “It also guides us to determine the overall level of activity in a scene.”

Vision scientists have long assumed that humans need to carefully consider multiple details before they can judge if a person or object is lifelike.

“But our study shows that participants made animacy decisions without conscious deliberation, and that they agreed on what was lifelike and what was not,” said study senior author David Whitney, a UC Berkeley psychology professor. “It is surprising that, even without talking about it or deliberating about it together, we immediately share in our impressions of lifelikeness.” [...]

Moreover, if we did not possess the ability to speedily determine lifelikeness, our world would be very confusing, with every person, animal or object we see appearing to be equally alive, Whitney said.
* * * * *

Fast ensemble representations for abstract visual impressions

Allison Yamanashi Leib, Anna Kosovicheva & David Whitney
Nature Communications 7, Article number: 13186 (2016) doi:10.1038/ncomms13186
Published online: 16 Nov. 2016


Much of the richness of perception is conveyed by implicit, rather than image or feature-level, information. The perception of animacy or lifelikeness of objects, for example, cannot be predicted from image level properties alone. Instead, perceiving lifelikeness seems to be an inferential process and one might expect it to be cognitively demanding and serial rather than fast and automatic. If perceptual mechanisms exist to represent lifelikeness, then observers should be able to perceive this information quickly and reliably, and should be able to perceive the lifelikeness of crowds of objects. Here, we report that observers are highly sensitive to the lifelikeness of random objects and even groups of objects. Observers’ percepts of crowd lifelikeness are well predicted by independent observers’ lifelikeness judgements of the individual objects comprising that crowd. We demonstrate that visual impressions of abstract dimensions can be achieved with summary statistical representations, which underlie our rich perceptual experience.

* * * * *

From the conclusion:

Our findings reveal that ensemble perception of lifelikeness is achieved extremely rapidly. While previous work has shown that observers categorize stimuli in a brief time period (for example, animal or non-animal34,35), our study shows that observers can perceive relative lifelikeness (that is, whether one stimulus is more life-like than another) on a similarly rapid timescale for groups as well. These results parallel the rapid time scale reported in previous ensemble coding experiments using stimuli with explicit physical dimensions24,26, highlighting the remarkable efficiency of ensemble representations that support abstract visual impressions.

Our findings suggest that lifelikeness is an explicitly coded perceptual dimension that is continuous as opposed to dichotomous. One prior study has investigated whether animacy is a strictly dichotomous representation, or whether animacy is represented as a continuum36. While this prior study focused on single repeated stimuli shown for longer exposure durations, our findings extend this question to groups of heterogeneous objects that were briefly presented. Our participants extracted a graded ensemble percept of group lifelikeness. Because of the rapid timescale, the judgements of lifelikeness in our experiment would not allow for cognitive reasoning or social processes. Consistent with this, explicit memory of the objects in the sets was not sufficient to account for the number of objects integrated into the ensemble percept. Our results suggest that graded representations of object and crowd lifelikeness emerge as a basic, shared visual percept, available during rudimentary and rapid visual analysis of scenes.

Animacy, as a general construct and topic of cognition research, is extremely complex. Numerous contextual, cognitive and social mechanisms come into play when determining whether an object exhibits animate qualities. Specifically, when making judgements about animacy, theory of mind37–39, contextual cues40,41 and cognitive strategies42 contribute significantly to animacy evaluations. These complexities help explain why there are relatively few agreed-upon operational definitions of animacy or lifelikeness.

In contrast to the ambiguity of the terms animacy or lifelikeness, our results show that the ensemble perception of lifelikeness in groups of static objects was surprisingly consistent across observers. When stimuli were presented for brief durations, observers reached a remarkable consensus on the average lifelikeness—even regarding objects that exhibit seemingly ambiguous qualities. This consistency suggests that a similar percept of lifelikeness is commonly available to observers who glance at a scene. Numerous cognitive and social mechanisms may come online later, and observers may refine their percepts of lifelikeness when given longer periods to evaluate items and context. However, in a first-glance impression of the environment, observers share a relatively unified, consistent percept of lifelikeness.

Change in national mood shows up in patterns of word usage observed in historical databases

In the wake of the election, it’s clear American society is fractured. Negative emotions are running amok, and countless words of anger and frustration have been spilled. If you were to analyze this news outlet for the ratio of positive emotional words to negative ones, would you find a dip linked to the events of the past few weeks?

It’s possible, suggests a study published last week in Proceedings of the National Academy of Sciences. Analyzing Google Books and The New York Times’s archives from the last 200 years, the researchers examined a curious phenomenon known as “positive linguistic bias,” which refers to people’s tendency to use more positive words than negative words. Though the bias is robust — and found consistently across cultures and languages — social scientists are at odds about what causes it.

In this study, the authors shed light on some possible new patterns behind the effect. Across two centuries of texts, they found that people's preference for positive words varied with national mood, and declined during times of war and economic hardship.
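The basic measure behind such studies is simple: count positive and negative word tokens against sentiment lexicons and take the ratio. Here's a toy sketch; the word lists are hypothetical stand-ins, since the study used established lexicons and much larger corpora:

```python
def positivity_ratio(text, positive, negative):
    """Ratio of positive to negative word tokens in a text: a toy version of
    the linguistic-positivity measure described above."""
    words = text.lower().split()
    pos = sum(w.strip(".,!?;:") in positive for w in words)
    neg = sum(w.strip(".,!?;:") in negative for w in words)
    return pos / neg if neg else float("inf")

# Hypothetical mini-lexicons for illustration only
POSITIVE = {"good", "happy", "peace", "hope"}
NEGATIVE = {"bad", "war", "anger", "fear"}

ratio = positivity_ratio("Hope and peace beat fear.", POSITIVE, NEGATIVE)  # 2 positive, 1 negative
```

Tracking this ratio year by year across a time-stamped corpus is what lets the authors relate it to wars and economic hardship.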
* * * * *

Linguistic positivity in historical texts reflects dynamic environmental and psychological factors


For nearly 50 y social scientists have observed that across cultures and languages people use more positive words than negative words, a phenomenon referred to as “linguistic positivity bias” (LPB). Although scientists have proposed multiple explanations for this phenomenon—explanations that hinge on mechanisms ranging from cognitive biases to environmental factors—no consensus on the origins of LPB has been reached. In this research, we derive and test, via natural language processing and data aggregation, divergent predictions from dominant explanations of LPB by examining it across time. We find that LPB varies across time and therefore cannot be explained simply as the product of cognitive biases and, further, that these variations correspond to fluctuations in objective circumstances and subjective mood.


People use more positive words than negative words. Referred to as “linguistic positivity bias” (LPB), this effect has been found across cultures and languages, prompting the conclusion that it is a panhuman tendency. However, although multiple competing explanations of LPB have been proposed, there is still no consensus on what mechanism(s) generate LPB or even on whether it is driven primarily by universal cognitive features or by environmental factors. In this work we propose that LPB has remained unresolved because previous research has neglected an essential dimension of language: time. In four studies conducted with two independent, time-stamped text corpora (Google books Ngrams and the New York Times), we found that LPB in American English has decreased during the last two centuries. We also observed dynamic fluctuations in LPB that were predicted by changes in objective environment, i.e., war and economic hardships, and by changes in national subjective happiness. In addition to providing evidence that LPB is a dynamic phenomenon, these results suggest that cognitive mechanisms alone cannot account for the observed dynamic fluctuations in LPB. At the least, LPB likely arises from multiple interacting mechanisms involving subjective, objective, and societal factors. In addition to having theoretical significance, our results demonstrate the value of newly available data sources in addressing long-standing scientific questions.

PNAS November 21, 2016

Monday, November 28, 2016

Wires over Hoboken


AI Panics (When will they learn?) – A post at Language Log

The last month or so has seen renewed discussion of the benefits and dangers of artificial intelligence, sparked by Stephen Hawking's speech at the opening of the Leverhulme Centre for the Future of Intelligence at Cambridge University. In that context, it may be worthwhile to point again to the earliest explicit and credible AI warning that I know of, namely Norbert Wiener's 1950 book The Human Use of Human Beings [...]:
[T]he machine plays no favorites between manual labor and white-collar labor. Thus the possible fields into which the new industrial revolution is likely to penetrate are very extensive, and include all labor performing judgments of a low level, in much the same way as the displaced labor of the earlier industrial revolution included every aspect of human power. […]

The introduction of the new devices and the dates at which they are to be expected are, of course, largely economic matters, on which I am not an expert. Short of any violent political changes or another great war, I should give a rough estimate that it will take the new tools ten to twenty years to come into their own. […]
Liberman goes on to offer an old sorta' prognostication of his own (more of a cautionary note) and quotes more of Wiener's book. His point in quoting Wiener, which he makes explicit in a reply to a comment by Victor Mair, is that Wiener's time scale was way off:
Wiener seriously underestimated the difficulty of pattern recognition, of robotic control for complex mechanisms, and of integrating the two. Considerable progress has been made in those areas but there are still unsolved problems. He also underestimated the difficulty of speech recognition and text analysis.

In my opinion, current prognosticators tend to similarly underestimate the difficulty of human-like communicative interaction. It's relatively easy to give the impression of solving the problem (Eliza, Siri) without really even trying to solve it.
Thus Siri has no understanding of questions put to it or of the answers it provides, even if the answers are good ones. But there is powerful technology behind Siri, powerful in a way that could scarcely have been imagined in Wiener's time.

I've appended a comment I made to Liberman's post.

* * * * *

Back in the mid-1970s I was studying computational semantics with David Hays. Every now and then I would ask him, When do you think we'll be able to do X? where X ranged over various interesting things one might want of linguistic computing. He always refused to answer, asserting that these things are deeply unpredictable. Remember, he was in the first generation of researchers into machine translation, and he'd been on the committee that wrote the ALPAC report. He had practical experience in such things.

In 1975 he got invited to review the computational linguistics literature for the journal Computers and the Humanities. He asked me to draft the text (as I'd been reviewing the literature for the American Journal of Computational Linguistics). I did so and included a bit about an article about computational semantics I was publishing in MLN (Modern Language Notes), as it spoke directly to humanist concerns and included an analysis of a Shakespeare sonnet. We then floated, as a thought experiment, the idea of a computational system capable of reading a Shakespeare play, in some interesting, but unspecified, sense of the word 'reading.' We called it Prospero and set no date on when Prospero would be operational, but in my mind I figured we'd have it in 20 years or so.

Well, the article appeared in 1976 ("Computational Linguistics and the Humanist"). Add 20 to that and we have 1996. Was anything like Prospero available then? No. Not only that, but the symbolic computing that was at the center of our review, and of Prospero, was being pushed into the background by statistical methods. It's now 2016, 40 years after that paper. We don't have anything like Prospero now – though I believe Patrick Henry Winston is using the Macbeth story (but not Shakespeare's play) in an investigation of story comprehension – and I see no prospects for Prospero in the near future. And yet, by the practical standards of 1976, Siri is a marvel, as is Google's translation tech, and so are self-driving vehicles. Etc.

It's a brave new world that has such machines in it, and most of it is still unexplored.

* * * * *

I've been entertaining the idea that, in some ways, we're on the edge of the Marvelous Future. No, we're not flying around in jet packs; getting humans to low-earth orbit is not as routine as Kubrick depicted in 2001; the computational marvels of the Star Trek computer are still in the unforeseeable future, not to mention Cmdr Data; and environmental catastrophe seems to be closing in on us. But we're living in a very different world from that of 1950 and confront very different possibilities. Technology is at the center of it. Now we have to accommodate our thinking about society to fit the very different world before us. We need to think about universal basic income. Among other things.

I just watched a conversation in which economist Glenn Loury (of Brown) cited Dani Rodrik to the effect that, of globalization, national sovereignty, and democracy, you can have any two of the three, but not all three.

Saturday, November 26, 2016

Protect the vulnerable: Identity politics is here to stay

Michelle Goldberg in Slate; some definitions:
Identity politics and political correctness aren’t the same thing, but they are interrelated. One situates political claims in a person’s racial and sexual status. The other tries to force a surface consensus on racial and sexual equality through taboos and speech codes.
Guilt-mongering is counterproductive:
The spasms of unchained bigotry we’ve seen post-election suggest that some Trump supporters were simply longing to howl NIGGER! KIKE! CUNT! FAGGOT! Among those I spoke to, however, some felt bullied for violating more arcane speech rules they neither assented to nor understood. Social media had forced them to submit to an alien set of norms; Trump liberated them. The late cultural critic Ellen Willis might have seen this coming. “Coercion and guilt-mongering—the symbiotic weapons of authoritarian culture—inevitably provoke resistance; when the left uses these tactics it merely encourages people to confuse their most oppressive impulses with their need to be themselves, offensively honest instead of hypocritically nice,” she wrote in a 1992 essay aptly titled “Identity Crisis.” “Perversely, racism and sexism become badges of freedom rather than stigmata of repression, while the roots of domination in people’s rage and misery remain untouched.”
The political advantages of fascist culture:
Trump offers his followers the fascist bargain that Walter Benjamin described in the epilogue to The Work of Art in the Age of Mechanical Reproduction. “Fascism attempts to organize the newly created proletarian masses without affecting the property structure which the masses strive to eliminate,” he wrote. “Fascism sees its salvation in giving these masses not their right, but instead a chance to express themselves.” Benjamin, a Marxist, treated this as an example of false consciousness. Perhaps, however, we should pay Trump voters the courtesy of assuming that at least some of them knew what they were doing when they opted for the politics of cultural revenge delivered by a billionaire in a gold-plated airplane. The question, then, is what those of us who are the objects of this revenge should do now.
Going forward:
Certainly, Democrats should champion the interests of working people. They should struggle to expand the social safety net and defend the labor movement against conservative attempts to destroy it. They should work to preserve the gains of the Affordable Care Act, even for those Trump supporters who just voted to gut their own health care. But there can be no going back on defending the tenuous gains of women and people of color, or foregrounding their demands for full equality. They are the base of the party, the people who gave Hillary Clinton a popular vote majority but will now be ruled by a hostile minority.