A banana explores the poweful accomplishment that the Five Factor Model of personality represents for social science https://t.co/juBoY4wMRd— Poweful Banana (@literalbanana) July 4, 2020
See my post, Measurement: IQ and 5 Personality Factors.
If I were starting grad school now I would not do a PhD in ML. It's not the competition that would steer me away though, I think it's a bad idea for other reasons. https://t.co/Y551gtaXE9— Misha Denil (@notmisha) July 4, 2020
The field has no consensus on what the big important challenges are. Today is better than a couple years ago when we really had no idea, but the most promising leads come in the form of bringing ML to bear on problems of adjacent fields (chemistry, robotics, economics, etc).
The new challenges all share a common theme which separates them from the old challenges. They all require a much greater depth of domain knowledge to understand if you are making progress.
(It's easy to tell if your dog classifier can classify dogs, or if your translation model can translate sentences. It is hard to tell if your small molecule VAE can generate plausible structures, or if your simulated economy can offer insights to policymakers.)
In the meantime, the ML community seems intent on
1. Hiding domain details behind standardized benchmarks
2. Commoditizing its own tools and methods
Neither of these things are intrinsically bad. Imagenet was a powerful benchmark that drove years of progress, and the democratization of ML tools has been a huge boon in countless ways.
But when your most promising path to impact is to bring your methods to neighboring fields it does seem like a strategic error to avoid learning about the details of those fields and to simultaneously make it easy for them to adopt your tools without you.
The way the landscape looks today, it's a lot easier to teach a chemist to use tensorflow than it is to teach an ML-er to do chemistry, and that gap is only going to get larger as the tooling gets better.
The next LSTM or resnet or transformer will probably come from the core ML community, but developments like that are few and far between.
So when people ask me if they should do a PhD in ML I say no, they should do a PhD in something else and also learn tensorflow. I think they're much more likely to do meaningful ML work that way.
Abstract: This is a brief guide to the various papers and books that William Benzon and David Hays have written about the long-term evolution of culture. Much of the work is descriptive in that it characterizes a variety of cultural phenomena at four different cultural Ranks as characterized by the conceptual mechanisms available to the culture but does not attempt to describe the causal process by which these mechanisms have evolved in human history. The ranks have numerical designations that align with widely recognized historical epochs: Rank 1, preliterate; Rank 2, literacy; Rank 3, post-Renaissance West, industrialism; and Rank 4, 20th century. Ranges of phenomena considered: basic cognition and thinking, narrative, music, expressive culture, technology, forms of governance and economic organization. The approach is briefly contrasted with work by Robert Wright, Richard Dawkins, Boyd and Richerson, and others.
Abstract: Widespread concerns about new technologies—whether they be novels, radios, or smartphones—are repeatedly found throughout history. Although tales of past panics are often met with amusement today, current concerns routinely engender large research investments and policy debate. What we learn from studying past technological panics, however, is that these investments are often inefficient and ineffective. What causes technological panics to repeatedly reincarnate? And why does research routinely fail to address them? To answer such questions, I examined the network of political, population, and academic factors driving the Sisyphean cycle of technology panics. In this cycle, psychologists are encouraged to spend time investigating new technologies, and how they affect children and young people, to calm a worried population. Their endeavor, however, is rendered ineffective because of the lack of a theoretical baseline; researchers cannot build on what has been learned researching past technologies of concern. Thus, academic study seemingly restarts for each new technology of interest, which slows down the policy interventions necessary to ensure technologies are benefiting society. In this article, I highlight how the Sisyphean cycle of technology panics stymies psychology's positive role in steering technologies.

H/t Tyler Cowen.
* * * * *
In 1941, Mary Preston published “Children’s Reactions to Movie Horrors and Radio Crime” in The Journal of Pediatrics. The American pediatrician had studied hundreds of 6- to 16-year-old children and concluded that more than half were severely addicted to radio and movie crime dramas, having given themselves “over to a habit-forming practice very difficult to overcome, no matter how the aftereffects are dreaded” (pp. 147–148). Most strikingly, Preston observed that many children consumed these dramas “much as a chronic alcoholic does drink” (p. 167). Preston therefore voiced severe concerns about the children’s health and future outcomes: Children who consumed more radio crime or movie dramas were more nervous and fearful and suffered from worse general health and more disturbed eating and sleep.
To truly understand these claims, one needs to consider Preston’s work in the context of her time. The decade preceding her work saw both broad social and technological changes; the explosive growth in popularity of the household radio during this period, however, is especially striking. In 1922, 6,000 radios were owned by the American public; this number grew to 1.5 million by 1923, 17 million by 1932, and 44 million by 1940 (Dennis, 1998). In 1936, about nine in 10 New York households owned a household radio, and children in these homes spent between 1 and 3 hr a day listening to these devices (Dennis, 1998). This rapid rise in popularity sparked concerns not limited to Mary Preston’s article. A New York Times piece considered whether listening to the radio too much would harm children and lead to illnesses because the body needed “repose” and could not “be kept up at the jazz rate forever” (Ferrari, as cited in Dennis, 1998). Concerns voiced by the Director of the Child Study Association of America noted how radio was worse than any media that came before because “no locks will keep this intruder out, nor can parents shift their children away from it” (Gruenberg, 1935). This view was mirrored in a parenting magazine published at the time:
Here is a device, whose voice is everywhere. . . . We may question the quality of its offering for our children, we may approve or deplore its entertainments and enchantments; but we are powerless to shut it out . . . it comes into our very homes and captures our children before our very eyes. (Frank, as cited in Dennis, 1998)

In recent decades, concerns about the effects of radio on young people have practically disappeared—but societal concerns about emergent technologies have definitely not done so.
Given the option, many parents of today would enthusiastically welcome the consumption of radio dramas, especially if they would take the place of their children playing around on their phones or chatting to friends on social media. Just as was the case with the radio, academic publications and other reports now routinely liken these new digital pursuits to drug use (Royal Society of Public Health, 2017; see commentary, Przybylski & Orben, 2017). They once again raise the specter of vast proportions of the adolescent population becoming addicted to a new technology (Murali & George, 2007) and that this will have diverse and far-reaching negative consequences (Greenfield, 2014; see commentary, Bell, Bishop, & Przybylski, 2015). Although previous parents’ fears of radio addiction might seem amusing now, contemporary concerns about smartphones, online games, and social media are shaping and influencing policy around the world (Choi, Cho, Lee, Kim, & Park, 2018; Davies, Atherton, Calderwood, & McBride, 2019; Department for Digital, Culture, Media and Sport & Secretary of State for the Home Department, 2019; House of Commons Science and Technology Select Committee, 2019; Viner, Davie, & Firth, 2019; Wait Until 8th, 2018). These technology panics—times in which the general population is gripped by intense worry and concern about a certain technology—are influential and reoccurring. Current worries about new technologies are surprisingly similar to concerns about technologies that have preoccupied parents and policymakers in the past but are met with amusement today.
The similarity between concerns about the radio and social media provides a striking reminder that in every decade, new technologies enter human lives and that in their wake there will arrive widespread concerns about their effects on the most vulnerable in society. Technological advances and the concerns they engender form part of a constant cycle. Nearly identical questions are raised about any new technology that reaches the spotlight of scientific and public attention. These are then addressed by scientists, public commentators, and policymakers until a newer form of technology inspires the cycle of concern to restart. Understanding how these different spheres of academia, policy, and the public interplay is crucial to understanding how the reaction to new technologies might be improved.
In this article, I argue that people’s reactions to new technologies, and researchers’ approaches to studying them, are best understood through the lens of a comprehensive framework I have named the Sisyphean cycle of technology panics. The framework highlights the diverse actors that interact to cause technology panics to develop in repeated and almost identical cycles and outlines the consequences this has for academic and policy progress. In this article, I first examine technology panics of the past century and then move on to discuss why technology panics routinely evoke concern. I then discuss the role of politics and academia in addressing and magnifying these widespread worries, critically reflecting on the positive and negative influence of the psychological sciences. Finally, I look ahead and touch on what can be done by researchers to ameliorate or address the negative effects of this cycle of technological panics in the face of an increasingly accelerating technological revolution.
Few things have taught me more about the world than keeping a fish tank.
You get some new fish, put them in your fish tank, everything’s fine. You get some more new fish, the next morning one of them has killed almost all the others. Another time you get a new fish and it’s all gaspy and pumping its gills desperately, because it’s from alkaline waters and your tank is too acidic for it. So you put in a little pH adjusting powder and… all the other fish get sick from the ammonia that releases and die. Another time you get a new fish and it’s sick! So you put fish antibiotics in the water, aaaand… they kill all the symbiotic bacteria in your filter system and the water gets filled with rotting bacteria, and the fish die. Another time you do absolutely nothing, and the fish die.
What’s happening? The same thing that happened in the first two centuries after Francis Bacon, when science was learning tons but achieving little that actually improved daily life. The system is more complex than it seems. A change which achieves its intended purpose also throws out-of-whack vital forces you did not realize were connected to it. The acidity buffer in the fish tank increases the nutrients in the water, which causes an algae bloom, which uses up the oxygen and suffocates the catfish. The marriage alliance between Milan and Ferrara makes Venice friends with Milan, which makes Venice’s rival Genoa side with Spain, which makes Spain reluctant to anger Portugal, which makes them agree to a marriage alliance, and then Spain is out of princesses and can’t marry the Prince of Wales, and the next thing you know there are soldiers from Scotland attacking Bologna. A seventeenth-century surgeon realizes that cataracts are caused by something white and opaque appearing at the front of the eye and so removes it, not yet understanding that it’s the lens and you really need it.
So when I hear people ask “Has social progress failed?” or “Has liberalism failed?” or “Has the Civil Rights Movement failed?” my zoomed-in self, my scared self, the self living in this crisis feels afraid and uncertain, but my zoomed-out self, my historian self answers very easily. No. These movements have done wonders, achieved tons! But they have also done what all movements do in a dynamic historical system: they have had large, complicated consequences. They have added something to the fish tank. Because the same Enlightenment impulse to make a better, more rational world, where everyone would have education and equal political empowerment BOTH caused the brutalities of the Belgian Congo AND gave me the vote. And that’s the sort of thing historians look at, all day.
But if the consequences of our actions are completely unpredictable, would it be better to say that change is real but progress controlled by humans is just an idea which turned out to be wrong? No. I say no. Because I gradually got better at understanding the fish tank. Because the doctors gradually figured out how the eye really does function. Because some of our civil rights have come by blood and war, and others have come through negotiation and agreement. Because we as humans are gradually learning more about how our world is interconnected, and how we can take action within that interconnected system. And by doing so we really have achieved some of what Francis Bacon and his followers waited for through those long centuries: we have made the next generation’s experience on this Earth a little better than our own. Not smoothly, and not quickly, but actually. Because, in my mock papal election, the dam did break, but those students who worked hard to dig their channels did direct the flood, and most of them managed to achieve some of what they aimed at, though they always caused some other effects too.
Is it still blowing up in our faces?
Is it going to keep blowing up in our faces, over and over?
Is it going to blow up so much, sometimes, that it doesn’t seem like it’s actually any better?
Is that still progress?
Because there was a baby in the bathwater of Whig history. If we work hard at it, we can find metrics for comparing times and places which don’t privilege particular ideologies. Metrics like infant mortality. Metrics like malnutrition. Metrics like the frequency of massacres. We can even find metrics for social progress which don’t irrevocably privilege a particular Western value system. One of my favorite social progress metrics is: “What portion of the population of this society can be murdered by a different portion of the population and have the murderer suffer no meaningful consequences?” The answer, for America in 2017, is not 0%. But it’s also not 90%. That number has gone down, and is now far below the geohistorical norm. That is progress. That, and infant mortality, and the conquest of smallpox. These are genuine improvements to the human condition, of the sort that Bacon and his followers believed
would come if they kept working to learn the causes and secret motions of things. And they were right. While Whig history privileges a very narrow set of values, metrics which track things like infant mortality, or murder with impunity, still privilege particular values — life, justice, equality — but aim to be compatible with as many different cultures, and even time periods, as possible. They are metrics which stranded time travelers would find it fairly easy to explain, no matter where they were dumped in Earth’s broad timeline. At least that’s our aim. And such metrics are the best tool we have at present to make the comparisons, and have the discussions about progress, that we need to have to grapple with our changing world.
Because progress is both a concept and a phenomenon.
The concept is the hope that collective human effort can make every generation’s experience on this Earth a little better than the previous generation’s. That concept has itself become a mighty force shaping the human experience, like communism, iron, or the wheel. It is a valuable thing to look at the effects that concept has had, to talk about how some have been destructive and others constructive, and to study, from a zoomed-out perspective, the consequences, successes, and failures of different movements or individuals who have acted in the name of progress.
The phenomenon is also real. My own personal assessment of it is just that, a personal assessment, with no authority beyond some years spent studying history. I hope to keep reexamining and improving this assessment all the days of my life. But here at the beginning of 2017 I would say this:
Progress is not inevitable, but it is happening.
It is not transparent, but it is visible.
It is not safe, but it is beneficial.
It is not linear, but it is directional.
It is not controllable, but it is us. In fact, it is nothing but us.
Progress is also natural, in my view, not in the sense that it will inevitably triumph over its doomed opposition, but in the sense that the human animal is part of nature, so the Declaration of the Rights of Man is as natural as a bird’s nest or a beaver dam.
I would agree with him [Joel Mokyr] that after 1900 science really starts to matter. Artificial fertilizers, for example, are terribly important. The Green Revolution, to take another similar case, made India into a net grain exporter in really a very few years, in about 10 years or so. So science matters a lot. We couldn't have what we're on right now without science, I agree. But before that it's mainly technology. And the problem is that people put it all into one word and say it fast. They say science-and-technology. So you get the idea that science is really what it is. Technology is just what these silly bourgeoisie were doing, making it pay off. And that's not right.
Everything around you, look around your house. It's designed, for beauty and for profit, and everything you see around you is. And that has very little to do with science on the whole. So science is nice. ... I regard myself as a scientist, but it's the craftsperson, the engineer, applying the science.
Science is about analyzing and describing to arrive at theories and models of how things work. The end result of a course of scientific work is an account of how some phenomenon can be explained within a given framework of laws and models. Such frameworks are likely to be elegant and compact. Newton, for example, had three laws of motion, not 57.
Engineering is quite different. Engineers use laws and models to analyze situations so that they can design a device to perform a certain task. The output of a course of engineering work is the description of that device and plans for its construction. To have any value those plans must specify something that can be constructed with known materials using known methods.
Thus when I was on the faculty at the Rensselaer Polytechnic Institute I learned that the engineering curriculum had a design stem, a series of courses required of all engineers devoted specifically to design. That is, it was not assumed that engineering graduates would somehow magically figure out how to design buildable stuff once they’d graduated and taken jobs in the “real” world. They were taught, given practice in, designing and building things.
Science isn’t like that. Scientists may design experiments, but that’s mostly a matter of logic, not of constructing something piece by piece, for tens, hundreds, or thousands of pieces. And yes, scientists may construct apparatus. To the extent they are doing that, they are acting as engineers. For that matter, engineers will make observations and conduct tests as part of their design work. And so they will, on occasion, act as scientists. But the overall objectives and methods, the envelope, if you will, of a scientific enterprise are different from those of an engineering enterprise.
Bourgeois Equality: How Ideas, Not Capital or Institutions, Enriched the World. Vol. 3 of the trilogy “The Bourgeois Era,” University of Chicago Press, 2016, 787 + xlii pp.
The book explores the reputational rise of the bourgeoisie, that is, a Bourgeois Revaluation overtaking Holland and then Britain from Shakespeare’s time to Adam Smith. It made the modern world, by giving a reason for ordinary people to innovate. The material changes—empire, trade—were shown in Bourgeois Dignity (2010) to be wholly inadequate to explain the explosion of incomes 1800 to the present. What pushed the world into frenetic innovation were the slowly changing ideas 1600–1848 about the urban middle class and about their material and institutional innovations. A class long scorned by barons and bishops, and regulated into stagnation by its very own guilds and city councils and state-sponsored monopolies, came to be treasured—at least by the standard of earlier, implacable scorn—from 1600 to the present, first in Holland and then in Britain and then the wider world. And when the Amsterdamers after 1600 or so, and the Londoners and Bostonians after 1700 or so, commenced innovating, more people commenced admiring them. The new valuation of the bourgeoisie, a new dignity and liberty for ordinary people was a change peculiar to northwestern Europe in how people applied to economic behavior the seven old words of virtue—prudence, justice, courage, temperance, faith, hope, and love. With more or less good grace the people around the North Sea began to accept the outcome of trade-tested betterment. Then people did so in Europe generally and its offshoots, and finally in our own day in China and India. Most came for the first time to regard creative destruction as just, and were courageous about responding to it, and hopeful in promoting it. Most people, with the exception of the angry clerisy of artists and intellectuals (and even them only after 1848), stopped hating the bourgeoisie as much as their ancestors had for so very long before. Many started loving it. 
In consequence during a century or two the northwest Europeans became shockingly richer in goods and in spirit. That is, not economics but “humanomics” explains our riches.
University of Chicago Press page for the book.
GPT3 writing code. A compiler from natural language to code.— Flo Crivello (@Altimor) July 2, 2020
People don't understand — this will change absolutely everything. We're decoupling human horsepower from code production. The intellectual equivalent of the discovery of the engine. https://t.co/QGJbQRBdQv pic.twitter.com/CJIaRK8j0M
Remember, "there is no fire alarm for artificial general intelligence." When the singularity does come, it will start with seemingly mundane feats (if you can even call this that). Now really does feel like the beginning of that. https://t.co/PyILwAabp0— Flo Crivello (@Altimor) July 2, 2020
From "it's a best practice to start by defining your API and writing a README" to "defining your API and writing a README is all you need to do"— Flo Crivello (@Altimor) July 2, 2020
This model itself: no implications. And I don’t see autonomy over the horizon either. But 10 years out, I could see neural models assisting writing/coding, making *very* skilled workers more productive, but reducing entry-level opportunities.— Ted Underwood (@Ted_Underwood) July 3, 2020
Brooks' observations are based on his experiences at IBM while managing the development of OS/360. He had added more programmers to a project falling behind schedule, a decision that he would later conclude had, counter-intuitively, delayed the project even further. He also made the mistake of asserting that one project—involved in writing an ALGOL compiler—would require six months, regardless of the number of workers involved (it required longer). The tendency for managers to repeat such errors in project development led Brooks to quip that his book is called "The Bible of Software Engineering", because "everybody quotes it, some people read it, and a few people go by it". The book is widely regarded as a classic on the human elements of software engineering.
We're still just so bad at building software. Our software sucks, our tools for building software suck, and maybe, for some deep cognitive reasons or something, we can't do very much better than we currently are. I hope that's not true, and my strong supposition is that it's not true, and so the sense that we're somehow kind of plateauing, that we're doing about as well as is possible, just feels misguided to me. It still kills me that we're building, you know, MS-DOS programs, and there are so many, many layers beyond that which we're currently realizing.
In the United States, 75% of venture capital goes to software. Some 5 to 10% goes to biotech. The other sliver goes to everything else—transportation, sanitation, health care. No wonder the pandemic has exposed venture capital’s broader failures. https://t.co/N8FaQjJXGm— MIT Technology Review (@techreview) July 1, 2020
Piano retailer boasts: “It’s actually been the best three months that I’ve seen in retail.” https://t.co/FXCO4KaE6G People in lockdown have apparently rediscovered the joys of home music-making.— Ted Gioia (@tedgioia) July 1, 2020
When the coronavirus sequestered Americans at home and forced businesses to close, Hale Ryan braced himself for a financial winter. As the director of sales and marketing at Metroplex Piano in Dallas and a 30-year veteran of the piano business, he had seen other crises — like 9/11 and the 2008 recession — damage sales. When the lockdown began in March, Mr. Ryan said in a recent phone interview, “I thought this was going to be the final nail.”
Instead, he began to field a flood of requests for instruments. Even with his showroom closed, the economy nose-diving and the professional music world in tatters, he sold pianos.
“It’s actually been the best three months that I’ve seen in retail,” he said.
The piano market encompasses a wide range of instruments, from hand-built concert grands that cost hundreds of thousands of dollars to factory-made uprights, digital pianos and keyboards designed for young learners. The high-water mark of piano sales in America was 1909, when 364,500 new acoustic pianos were sold in the country. Since then, radio, television, recordings and instrument technology transformed the way music is created and consumed. Only about 30,000 new acoustic pianos are now sold here each year, but the number surpasses a million when all digital varieties are included. [...]
And yet interviews with nearly a dozen dealers across the country reveal surprisingly robust sales that suggest a resurgence of at-home music-making just as the live concert scene vanished. Most of the dealers noted a rise in demand for digital pianos, which allow players to channel the sound through headphones: a key feature in households where working-from-home parents share space with distance-learning children. The phenomenon seems to be part of a general pivot toward home-based recreation, along with increased demand for gym equipment and bicycles.
Indeed, a significant portion of purchasers appears to be new to the market. Tom Sumner, the president of Yamaha Corporation of America, said in an interview that he had heard from retailers that between 20 and 25 percent of sales this spring were to first-time buyers.
EARLY CHILDHOOD IN CHINA
Born in 1920 (Chinese year of the Monkey) near Shanghai, Cecilia was raised in Beijing (which before Mao was called Peking) in a wealthy family of twelve children (nine daughters and three sons). As a child, Cecilia was not allowed in the kitchen, as two cooks prepared Shanghai-style and Northern Mandarin-style cuisine for the family. She learned about food at the dinner table, where each dish in elaborate, multi-course meals was discussed and critiqued.
Cecilia’s privileged life came to an end in 1942, when she and a sister fled the Japanese occupation with an arduous thousand-mile, six-month trek (on foot!) from Beijing to Chongqing. She resettled in Shanghai, where, as a young woman, she met her husband and raised her children, May and Philip. The family enjoyed the sophisticated and dynamic Shanghai life when the city was booming. However, that all came to an end in 1949, when she fled from China to Japan during the Communist Revolution.
STARTING A BUSINESS IN AMERICA
In 1959, Cecilia traveled to San Francisco to visit her recently widowed sister for what was meant to be a brief stay. She stayed and in 1961, through a series of chance encounters, opened a Chinese restaurant on Polk Street that she named the Mandarin.
At this 65-seat “hole in the wall,” she introduced the American palate to authentic Northern Chinese cuisine from cities such as Shanghai and Beijing and the provinces of Sichuan and Hunan. Her menus were starkly different from the Americanized dishes that populated Chinese restaurants at the time, such as chop suey, chow mein, and egg foo young.
Looking back today, Cecilia says, “Maybe I was naïve about venturing into entrepreneurship in a new country, as an immigrant and in an industry dominated by men.” In her first restaurant, she wore many hats: hostess, reservationist, food procurer, waiter—even busboy! Granddaughter Siena Chiang credits her grandmother’s success to grit, luck, and “an uncanny sense for good food.”