This is the third most popular photo over the past half day or so (the top two were in my most recent Friday Fotos):
I'd almost forgotten about this one (from a set of discarded toys):
The general idea is that meaning is always negotiated and that experimental replication is an aspect of the negotiations.

Some of the reasons for the problems are well known. There's the "file drawer effect", where you try many experiments and only publish the ones that produce the results you want. There's p-hacking, data-dredging, model-shopping, etc., where you torture the data until it yields a "statistically significant" result of an agreeable kind. There are mistakes in data analysis, often simple ones like using the wrong set of column labels. (And there are less innocent problems in data analysis, like those described in this article about cancer research, where some practices amount essentially to fraud, such as performing cross-validation while removing examples that don't fit the prediction.) There are uncontrolled covariates — at the workshop, we heard anecdotes about effects that depend on humidity, on the gender of experimenters, and on whether animal cages are lined with cedar or pine shavings. There's a famous case in psycholinguistics where the difference between egocentric and geocentric coordinate choice depends on whether the experimental environment has salient asymmetries in visual landmarks (Peggy Li and Lila Gleitman, "Turning the tables: language and spatial reasoning", Cognition 2002).
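The file-drawer effect is easy to demonstrate by simulation. Here is a minimal Python sketch (my illustration, not from the post): run 100 experiments on pure noise and "publish" only those that cross p < 0.05. By construction, roughly 5% of null experiments will clear that bar by chance alone.

```python
# Illustrative simulation of the "file drawer effect": every experiment
# samples pure noise (true effect = 0), yet some reach "significance".
import math
import random

random.seed(0)

def t_test_p(sample, mu=0.0):
    """Two-sided one-sample t-test p-value, using a normal approximation
    to the t distribution (adequate for this sketch; a real analysis
    would use scipy.stats.ttest_1samp)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    t = (mean - mu) / math.sqrt(var / n)
    # P(|Z| > |t|) for a standard normal Z
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

# 100 "experiments", each 30 draws from N(0, 1): no real effect anywhere.
experiments = [[random.gauss(0, 1) for _ in range(30)] for _ in range(100)]
pvals = [t_test_p(s) for s in experiments]

# The file drawer: only the "significant" results get written up.
published = [p for p in pvals if p < 0.05]
print(f"{len(published)} of {len(pvals)} null experiments look 'significant'")
```

Reading only the published subset, every result looks like a real effect; the unpublished 95 or so stay in the drawer.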
The field of AI includes both neat and scruffy approaches. A neat system for MT would be a faithful implementation of some linguistic theory. Current leading MT systems are somewhat scruffy. They contain various hacks and shortcuts that help to produce a decent translation quickly.

Researchers with a scruffy-AI mindset may think that's just fine. Either they suspect that brains themselves are much scruffier than linguists admit, or they have no opinion about brains and simply want to engineer a working product.

A scruffy-AI researcher may want to enrich the current system to make more use of syntax, but will be perfectly happy to use a "big hairy four-by-four" approximation of syntax that is nailed onto the rest of the system with railroad spikes. The goal is to improve the end results by any expedient method.

Other researchers working on the same system may be true believers in neat AI. They really wish that the system had been designed on clean linguistic and statistical principles from the ground up. Unfortunately such systems would be hard to build and have not worked as well in the past, so these neat-AI researchers settle for helping to nail syntax onto an existing scruffy system. They feel proud of themselves for using (more) linguistics. But does this route really lead toward the utopian system they dream of? Can the hybrid system be gradually made more principled, as the old hacks are gradually phased out? Or is that just a comforting fantasy that sustains them, as it sustains Barthelme's construction workers? "The exercise of our skills, and the promise of the city, were enough."
Abstract: The Greatest Man in Siam is a Walter Lantz cartoon from 1943. It has a pseudo-Oriental setting and depicts a contest to win the hand of a young princess. The losers present themselves as intelligent, rich, and athletic, respectively, while the winner is a good musician and dancer. He’s also the only one who pays attention to the princess and doesn’t insult the king. The cartoon ends with everyone dancing, thus affirming communal values over individual accomplishment. Just before the end there is a virtuoso dance sequence between the couple; it was superbly animated by Pat Matthews.

Contents:
Introduction: What Fun! Learning to See 1
The Hottest Man in Siam 4
The Greatest Social Contract in Siam 18
Why Siam? The Contest Motif 32
The Phallus in the Palace 34
Eyes, Electricity, and a Contest 38
In Praise of Cartoons: Lantz Does Conceptual Integration 45
The Siam Paradox 54
Shamus Culhane of the Avant-Garde 55
All technology arises out of specific social circumstances. In our time, as in previous generations, cameras and the mechanical tools of photography have rarely made it easy to photograph black skin. The dynamic range of film emulsions, for example, was generally calibrated for white skin and had limited sensitivity to brown, red or yellow skin tones. Light meters had similar limitations, with a tendency to underexpose dark skin. And for many years, beginning in the mid-1940s, the smaller film-developing units manufactured by Kodak came with Shirley cards, so named after the white model who was featured on them and whose whiteness was marked on the cards as “normal.” Some of these instruments improved with time. In the age of digital photography, for instance, Shirley cards are hardly used anymore. But even now, there are reminders that photographic technology is neither value-free nor ethnically neutral. In 2009, the face-recognition technology on HP webcams had difficulty recognizing black faces, suggesting, again, that the process of calibration had favored lighter skin.
An artist tries to elicit from unfriendly tools the best they can manage. A black photographer of black skin can adjust his or her light meters; or make the necessary exposure compensations while shooting; or correct the image at the printing stage. These small adjustments would have been necessary for most photographers who worked with black subjects, from James Van Der Zee at the beginning of the century to DeCarava’s best-known contemporary, Gordon Parks, who was on the staff of Life magazine....
DeCarava, on the other hand, insisted on finding a way into the inner life of his scenes. He worked without assistants and did his own developing, and almost all his work bore the mark of his idiosyncrasies. The chiaroscuro effects came from technical choices: a combination of underexposure, darkroom virtuosity and occasionally printing on soft paper. And yet there’s also a sense that he gave the pictures what they wanted, instead of imposing an agenda on them.
This post elaborates on a comment I made in my Academia.edu session on my open letter to Steven Pinker.
The discipline of English studies was certainly well in place when I entered graduate school in 1948 and when I began full-time teaching in 1952. In those days we knew what we were doing. All sorts of disciplinary rules, boundaries, and taken-for-granted assumptions were firmly in place. We knew what the canon was, what were the main periods of English literary history, and what constituted good scholarship in the field.... In those days “we” were mostly men, all men in the English department at Hopkins, and all the works we studied, with some exceptions, were by men. American literature was pretty marginal. It all made perfect sense.