Sunday, July 5, 2020

So much for the Five Factor model of human personality


See my post, Measurement: IQ and 5 Personality Factors.

From the old neighborhood



"If I were starting grad school" – One person's view about the state of machine learning (ML)


The rest of Denil's tweet stream:
The field has no consensus on what the big important challenges are. Today is better than a couple years ago when we really had no idea, but the most promising leads come in the form of bringing ML to bear on problems of adjacent fields (chemistry, robotics, economics, etc).
The new challenges all share a common theme which separates them from the old challenges. They all require a much greater depth of domain knowledge to understand if you are making progress.

(It's easy to tell if your dog classifier can classify dogs, or if your translation model can translate sentences. It is hard to tell if your small molecule VAE can generate plausible structures, or if your simulated economy can offer insights to policymakers.)

In the meantime, the ML community seems intent on
1. Hiding domain details behind standardized benchmarks
2. Commoditizing its own tools and methods

Neither of these things is intrinsically bad. ImageNet was a powerful benchmark that drove years of progress, and the democratization of ML tools has been a huge boon in countless ways.

But when your most promising path to impact is to bring your methods to neighboring fields, it does seem like a strategic error to avoid learning about the details of those fields and to simultaneously make it easy for them to adopt your tools without you.

The way the landscape looks today, it's a lot easier to teach a chemist to use tensorflow than it is to teach an ML-er to do chemistry, and that gap is only going to get larger as the tooling gets better.

The next LSTM or resnet or transformer will probably come from the core ML community, but developments like that are few and far between.

So when people ask me if they should do a PhD in ML I say no, they should do a PhD in something else and also learn tensorflow. I think they're much more likely to do meaningful ML work that way.
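Denil's parenthetical about evaluation is worth making concrete. Here is a minimal sketch of the gap, in Python; it assumes scikit-learn and RDKit are installed, and the labels and SMILES strings are invented for illustration:

```python
# A toy contrast between benchmark evaluation and domain evaluation.
from sklearn.metrics import accuracy_score
from rdkit import Chem

# Easy: a dog classifier is scored by comparing predicted labels to truth.
y_true = [1, 0, 1, 1, 0]   # 1 = dog, 0 = not a dog
y_pred = [1, 0, 0, 1, 0]
print("accuracy:", accuracy_score(y_true, y_pred))  # 0.8 -- progress is legible

# Harder: even a weak sanity check on a molecule generator needs domain
# tooling. Parsing SMILES only tells you a structure is well-formed...
samples = ["CCO", "c1ccccc1", "C(("]  # ethanol, benzene, syntactically invalid
valid = [s for s in samples if Chem.MolFromSmiles(s) is not None]
print(f"{len(valid)}/{len(samples)} samples parse as valid molecules")
# ...and says nothing about whether they are plausible, stable, or
# synthesizable -- the questions a chemist would actually ask.
```

The first check is self-contained; the second already leans on a chemistry library, and a real assessment would lean on a chemist.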

Saturday, July 4, 2020

Mind-Culture Coevolution: Major Transitions in the Development of Human Culture and Society


I've now turned this into a PDF file which you can download here: https://www.academia.edu/37815917/Mind-Culture_Coevolution_Major_Transitions_in_the_Development_of_Human_Culture_and_Society

* * * * *
Abstract: This is a brief guide to the various papers and books that William Benzon and David Hays have written about the long-term evolution of culture. Much of the work is descriptive in that it characterizes a variety of cultural phenomena at four different cultural Ranks, defined by the conceptual mechanisms available to the culture, but it does not attempt to describe the causal process by which these mechanisms have evolved in human history. The ranks have numerical designations that align with widely recognized historical epochs: Rank 1, preliterate; Rank 2, literacy; Rank 3, post-Renaissance West, industrialism; and Rank 4, 20th century. Ranges of phenomena considered: basic cognition and thinking, narrative, music, expressive culture, technology, forms of governance and economic organization. The approach is briefly contrasted with work by Robert Wright, Richard Dawkins, Boyd and Richerson, and others.
Mind and Culture

A central phenomenon of the human presence on earth is that, over the long term, we have gained ever more capacity to understand and manipulate the physical world and, though some would debate this, the human worlds of psyche and society. The major purpose of the theory which the late David Hays and I have developed (and which I continue to develop) is to understand the mental structures and processes underlying that increased capacity. While more conventional students of history and of cultural evolution have much to say about what happened and when and what was influenced by what else, few have much to say about the conceptual and affective mechanisms in which these increased capacities are embedded. That is the story we have been endeavoring to tell. [1]

Our theory is thus about processes in the human mind. Those processes evolve in tandem with culture. They require culture for their support while they enable culture through their capacities. In particular, we believe that the genetic elements of culture are to be found in the external world, in the properties of artifacts and behaviors, not inside human heads. Hays first articulated this idea in his book on the evolution of technology, and I have developed it in my papers Culture as an Evolutionary Arena and Culture's Evolutionary Landscape, in my book on music, Beethoven's Anvil: Music in Mind and Culture, and in various posts at New Savanna and one for the National Humanities Center, which I have aggregated into four working papers.
This puts our work at odds with some students of cultural evolution, especially those who identify with memetics, who tend to think of culture's genetic elements as residing in nervous systems.

We have aspired to a system of thought in which the mechanisms of mind and feeling have discernible form and specificity rather than being the airy nothings of philosophical wish and theological hope. We would be happy to see computer simulations of the mechanisms we've been proposing. Unfortunately, neither the computational art nor our thinking has been up to this task. But that, together with the neuropsychologist's workbench, is the arena in which these matters must eventually find representation, investigation, and, a long way down the line, resolution. The point is that, however vague our ideas about mechanisms currently may be, it is our conviction that the phenomenon under investigation, culture and its implementation in the human brain, is not vague and formless, nor is it, any more, beyond our ken.

For a glossary of terms, see the page Cultural Evolution Terms.

Major Transitions

The story we tell is one of cultural paradigms existing at four levels of sophistication, which we call ranks. In the terminology of current evolutionary biology, these ranks represent major transitions in cultural life. Rank 1 paradigms emerged when the first humans appeared on the savannas of Africa speaking language as we currently know it. Those paradigms structured the lives of the primitive societies which emerged perhaps 50,000 to 100,000 years ago. Around 5,000 to 10,000 years ago Rank 2 paradigms emerged in relatively large, stable human societies with people subsisting on systematic agriculture, living in walled cities, and reading written texts. Rank 3 paradigms first emerged in Europe during the Renaissance and gave European cultures the capacity to dominate, and in a sense to create, world history over the last 500 years. This century has begun to see the emergence of Rank 4 paradigms.

Crudely considered, it appears that the interval between transitions is decreasing by an order of magnitude from one rank to the next:

         Informatics    Emergence (years ago)
Rank 1   Speech         50,000
Rank 2   Writing        5,000
Rank 3   Calculation    500
Rank 4   Computing      50

One does not have to look at those numbers for very long before wondering just what started emerging five years ago. While there is nothing in our theory that forbids the emergence of a fifth, or a sixth rank, and so on, it doesn’t seem plausible that the time between ranks can continue to diminish by an order of magnitude. The emergence of a new system of thought, after all, does not appear by magic. People have to think it into existence: How much time and effort is required to transcend the system of thought in which a person was raised? THAT limits just how fast new systems of thought can arise.
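To spell out the arithmetic, here is a trivial sketch (Python, figures taken from the table above; the Rank 5 line is a naive extrapolation, not a claim):

```python
# The order-of-magnitude pattern from the table above.
emergence = {
    "Rank 1 (speech)": 50_000,
    "Rank 2 (writing)": 5_000,
    "Rank 3 (calculation)": 500,
    "Rank 4 (computing)": 50,
}
for rank, years_ago in emergence.items():
    print(f"{rank}: {years_ago:,} years ago")

# Each emergence is 10x more recent than the last, so a naive
# extrapolation puts a hypothetical Rank 5 at ~5 years ago -- exactly
# the implausibility the text is pointing at.
print("Naive Rank 5 extrapolation:", 50 // 10, "years ago")
```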

Nothing in our theory limits the number of cultural ranks. Each rank is built on the previous ranks, operating on those cognitive and affective processes and using them as its agents. We see no inherent reason why ever more sophisticated paradigms cannot evolve. Thus we preach neither that history is at an end nor that we are at the apex of human development. From our point of view, the future event that some, such as Raymond Kurzweil, are calling The Singularity, is but another major transition in cultural evolution, another rank.

But, if further evolution is possible, it is not inevitable. We provide scant comfort to those who want to believe that human destiny guarantees a certain, and a certain type of, future for the nephews and nieces of their great grandchildren. Our aim has been to look at the past so that we may more deeply understand and act in the present. Our aim is not and has not been to predict the future. Rather, we could help create the concepts which our children and their children can use in order effectively to realize their hopes and quell their fears as they venture forth on the New Savanna of cultural possibility now emerging here and there. We have examined the past in order that we may help others to build the future.

Red, White, and Blue



Cycles of technology panics

Orben, A. (2020). The Sisyphean Cycle of Technology Panics. Perspectives on Psychological Science. https://doi.org/10.1177/1745691620919372
Abstract: Widespread concerns about new technologies—whether they be novels, radios, or smartphones—are repeatedly found throughout history. Although tales of past panics are often met with amusement today, current concerns routinely engender large research investments and policy debate. What we learn from studying past technological panics, however, is that these investments are often inefficient and ineffective. What causes technological panics to repeatedly reincarnate? And why does research routinely fail to address them? To answer such questions, I examined the network of political, population, and academic factors driving the Sisyphean cycle of technology panics. In this cycle, psychologists are encouraged to spend time investigating new technologies, and how they affect children and young people, to calm a worried population. Their endeavor, however, is rendered ineffective because of the lack of a theoretical baseline; researchers cannot build on what has been learned researching past technologies of concern. Thus, academic study seemingly restarts for each new technology of interest, which slows down the policy interventions necessary to ensure technologies are benefiting society. In this article, I highlight how the Sisyphean cycle of technology panics stymies psychology’s positive role in steering technologies.

* * * * *

In 1941, Mary Preston published “Children’s Reactions to Movie Horrors and Radio Crime” in The Journal of Pediatrics. The American pediatrician had studied hundreds of 6- to 16-year-old children and concluded that more than half were severely addicted to radio and movie crime dramas, having given themselves “over to a habit-forming practice very difficult to overcome, no matter how the aftereffects are dreaded” (pp. 147–148). Most strikingly, Preston observed that many children consumed these dramas “much as a chronic alcoholic does drink” (p. 167). Preston therefore voiced severe concerns about the children’s health and future outcomes: Children who consumed more radio crime or movie dramas were more nervous and fearful and suffered from worse general health and more disturbed eating and sleep.

To truly understand these claims, one needs to consider Preston’s work in the context of her time. The decade preceding her work saw both broad social and technological changes; the explosive growth in popularity of the household radio during this period, however, is especially striking. In 1922, 6,000 radios were owned by the American public; this number grew to 1.5 million by 1923, 17 million by 1932, and 44 million by 1940 (Dennis, 1998). In 1936, about nine in 10 New York households owned a household radio, and children in these homes spent between 1 and 3 hr a day listening to these devices (Dennis, 1998). This rapid rise in popularity sparked concerns not limited to Mary Preston’s article. A New York Times piece considered whether listening to the radio too much would harm children and lead to illnesses because the body needed “repose” and could not “be kept up at the jazz rate forever” (Ferrari, as cited in Dennis, 1998). Concerns voiced by the Director of the Child Study Association of America noted how radio was worse than any media that came before because “no locks will keep this intruder out, nor can parents shift their children away from it” (Gruenberg, 1935). This view was mirrored in a parenting magazine published at the time:
Here is a device, whose voice is everywhere. . . . We may question the quality of its offering for our children, we may approve or deplore its entertainments and enchantments; but we are powerless to shut it out . . . it comes into our very homes and captures our children before our very eyes. (Frank, as cited in Dennis, 1998)
In recent decades, concerns about the effects of radio on young people have practically disappeared—but societal concerns about emergent technologies have definitely not done so.

Given the option, many parents of today would enthusiastically welcome the consumption of radio dramas, especially if they would take the place of their children playing around on their phones or chatting to friends on social media. Just as was the case with the radio, academic publications and other reports now routinely liken these new digital pursuits to drug use (Royal Society of Public Health, 2017; see commentary, Przybylski & Orben, 2017). They once again raise the specter of vast proportions of the adolescent population becoming addicted to a new technology (Murali & George, 2007) and that this will have diverse and far-reaching negative consequences (Greenfield, 2014; see commentary, Bell, Bishop, & Przybylski, 2015). Although previous parents’ fears of radio addiction might seem amusing now, contemporary concerns about smartphones, online games, and social media are shaping and influencing policy around the world (Choi, Cho, Lee, Kim, & Park, 2018; Davies, Atherton, Calderwood, & McBride, 2019; Department for Digital, Culture, Media and Sport & Secretary of State for the Home Department, 2019; House of Commons Science and Technology Select Committee, 2019; Viner, Davie, & Firth, 2019; Wait Until 8th, 2018). These technology panics—times in which the general population is gripped by intense worry and concern about a certain technology—are influential and reoccurring. Current worries about new technologies are surprisingly similar to concerns about technologies that have preoccupied parents and policymakers in the past but are met with amusement today.

The similarity between concerns about the radio and social media provides a striking reminder that in every decade, new technologies enter human lives and that in their wake there will arrive widespread concerns about their effects on the most vulnerable in society. Technological advances and the concerns they engender form part of a constant cycle. Nearly identical questions are raised about any new technology that reaches the spotlight of scientific and public attention. These are then addressed by scientists, public commentators, and policymakers until a newer form of technology inspires the cycle of concern to restart. Understanding how these different spheres of academia, policy, and the public interplay is crucial to understanding how the reaction to new technologies might be improved.

In this article, I argue that people’s reactions to new technologies, and researchers’ approaches to studying them, are best understood through the lens of a comprehensive framework I have named the Sisyphean cycle of technology panics. The framework highlights the diverse actors that interact to cause technology panics to develop in repeated and almost identical cycles and outlines the consequences this has for academic and policy progress. In this article, I first examine technology panics of the past century and then move on to discuss why technology panics routinely evoke concern. I then discuss the role of politics and academia in addressing and magnifying these widespread worries, critically reflecting on the positive and negative influence of the psychological sciences. Finally, I look ahead and touch on what can be done by researchers to ameliorate or address the negative effects of this cycle of technological panics in the face of an increasingly accelerating technological revolution.
H/t Tyler Cowen.

Friday, July 3, 2020

Ada Palmer, a historian, takes a look at both the idea and reality of progress [it's complicated]

Ada Palmer, On Progress and Historical Change, Ex Urbe: History, Philosophy, Books, Food & Fandom.

From Part 6 (of 6):
Few things have taught me more about the world than keeping a fish tank.

You get some new fish, put them in your fish tank, everything's fine. You get some more new fish, the next morning one of them has killed almost all the others. Another time you get a new fish and it's all gaspy and pumping its gills desperately, because it's from alkaline waters and your tank is too acidic for it. So you put in a little pH adjusting powder and… all the other fish get sick from the ammonia that releases and die. Another time you get a new fish and it's sick! So you put fish antibiotics in the water, aaaand… they kill all the symbiotic bacteria in your filter system and the water gets filled with rotting bacteria, and the fish die. Another time you do absolutely nothing, and the fish die.

What’s happening? The same thing that happened in the first two centuries after Francis Bacon, when the science was learning tons, but achieving little that actually improved daily life. The system is more complex than it seems. A change which achieves its intended purpose also throws out-of-whack vital forces you did not realize were connected to it. The acidity buffer in the fish tank increases the nutrients in the water, which causes an algae bloom, which uses up the oxygen and suffocates the catfish. The marriage alliance between Milan and Ferrara makes Venice friends with Milan, which makes Venice’s rival Genoa side with Spain, which makes Spain reluctant to anger Portugal, which makes them agree to a marriage alliance, and then Spain is out of princesses and can’t marry the Prince of Wales, and the next thing you know there are soldiers from Scotland attacking Bologna. A seventeenth-century surgeon realizes that cataracts are caused by something white and opaque appearing at the front of the eye so removes it, not yet understanding that it’s the lens and you really need it.
So when I hear people ask "Has social progress failed?" or "Has liberalism failed?" or "Has the Civil Rights Movement failed?" my zoomed-in self, my scared self, the self living in this crisis feels afraid and uncertain, but my zoomed-out self, my historian self answers very easily. No. These movements have done wonders, achieved tons! But they have also done what all movements do in a dynamic historical system: they have had large, complicated consequences. They have added something to the fish tank. Because the same Enlightenment impulse to make a better, more rational world, where everyone would have education and equal political empowerment BOTH caused the brutalities of the Belgian Congo AND gave me the vote. And that's the sort of thing historians look at, all day.

But if the consequences of our actions are completely unpredictable, would it be better to say that change is real but progress controlled by humans is just an idea which turned out to be wrong? No. I say no. Because I gradually got better at understanding the fish tank. Because the doctors gradually figured out how the eye really does function. Because some of our civil rights have come by blood and war, and others have come through negotiation and agreement. Because we as humans are gradually learning more about how our world is interconnected, and how we can take action within that interconnected system. And by doing so we really have achieved some of what Francis Bacon and his followers waited for through those long centuries: we have made the next generation's experience on this Earth a little better than our own. Not smoothly, and not quickly, but actually. Because, in my mock papal election, the dam did break, but those students who worked hard to dig their channels did direct the flood, and most of them managed to achieve some of what they aimed at, though they always caused some other effects too.
Is it still blowing up in our faces?
Yes.
Is it going to keep blowing up in our faces, over and over?
Yes.
Is it going to blow up so much, sometimes, that it doesn’t seem like it’s actually any better?
Yes.
Is that still progress?
Yes.
Why?

Because there was a baby in the bathwater of Whig history. If we work hard at it, we can find metrics for comparing times and places which don't privilege particular ideologies. Metrics like infant mortality. Metrics like malnutrition. Metrics like the frequency of massacres. We can even find metrics for social progress which don't irrevocably privilege a particular Western value system. One of my favorite social progress metrics is: "What portion of the population of this society can be murdered by a different portion of the population and have the murderer suffer no meaningful consequences?" The answer, for America in 2017, is not 0%. But it's also not 90%. That number has gone down, and is now far below the geohistorical norm. That is progress. That, and infant mortality, and the conquest of smallpox. These are genuine improvements to the human condition, of the sort that Bacon and his followers believed would come if they kept working to learn the causes and secret motions of things. And they were right. While Whig history privileges a very narrow set of values, metrics which track things like infant mortality, or murder with impunity, still privilege particular values — life, justice, equality — but aim to be compatible with as many different cultures, and even time periods, as possible. They are metrics which stranded time travelers would find it fairly easy to explain, no matter where they were dumped in Earth's broad timeline. At least that's our aim. And such metrics are the best tool we have at present to make the comparisons, and have the discussions about progress, that we need to have to grapple with our changing world.

Because progress is both a concept and a phenomenon.

The concept is the hope that collective human effort can make every generation's experience on this Earth a little better than the previous generation's. That concept has itself become a mighty force shaping the human experience, like communism, iron, or the wheel. It is a valuable thing to look at the effects that concept has had, to talk about how some have been destructive and others constructive, and to study, from a zoomed-out perspective, the consequences, successes, and failures of different movements or individuals who have acted in the name of progress.

The phenomenon is also real. My own personal assessment of it is just that, a personal assessment, with no authority beyond some years spent studying history. I hope to keep reexamining and improving this assessment all the days of my life. But here at the beginning of 2017 I would say this:

Progress is not inevitable, but it is happening.
It is not transparent, but it is visible.
It is not safe, but it is beneficial.
It is not linear, but it is directional.
It is not controllable, but it is us. In fact, it is nothing but us.

Progress is also natural, in my view, not in the sense that it will inevitably triumph over its doomed opposition, but in the sense that the human animal is part of nature, so the Declaration of the Rights of Man is as natural as a bird’s nest or a beaver dam.

Friday Fotos: Some of my favorites from a recent look-through






Deirdre McCloskey on the importance of ideas for economic growth [Note: science and technology are NOT to be conflated]

Here's Jason Crawford's interview with economist Deirdre McCloskey for the Torch of Progress Series:



At about 18:23 she expresses an idea that's important to me, the difference between science and technology:
I would agree with him [Joel Mokyr] that after 1900 science really starts to matter. Artificial fertilizers, for example, are terribly important. The Green Revolution, to take another similar case, made India into a net grain exporter in really a very few years, in about 10 years or so. So science matters a lot. We couldn't have what we're on right now without science, I agree. But before that it's mainly technology. And the problem is that people put it all into one word and they say it fast. They say science-and-technology. So you get the idea that science is really what it is. Technology is just what these silly bourgeoisie were doing, making it pay off. And that's not right.

Everything around you, look around your house. It's designed, for beauty and for profit, and everything you see around you is. And that has very little to do with science on the whole. So science is nice. ... I regard myself as a scientist, but it's the craftsperson, the engineer, applying the science.
I wrote a post on that a couple of years ago, Scienceandtechnology, or, Engineers Rule!. From the post:
Science is about analyzing and describing to arrive at theories and models of how things work. The end result of a course of scientific work is an account of how some phenomenon can be explained within a given framework of laws and models. Such frameworks are likely to be elegant and compact. Newton, for example, had three laws of motion, not 57.

Engineering is quite different. Engineers use laws and models to analyze situations so that they can design a device to perform a certain task. The output of a course of engineering work is the description of that device and plans for its construction. To have any value those plans must specify something that can be constructed with known materials using known methods.

Thus when I was on the faculty at Rensselaer Polytechnic Institute I learned that the engineering curriculum had a design stem, a series of courses required of all engineers devoted specifically to design. That is, it was not assumed that engineering graduates would somehow magically figure out how to design buildable stuff once they’d graduated and taken jobs in the “real” world. They were taught, and given practice in, designing and building things.

Science isn’t like that. Scientists may design experiments, but that’s mostly a matter of logic, not of constructing something piece by piece by piece, and so forth, for 10s, 100s, or 1000s or more pieces. And yes, scientists may construct apparatus. To the extent they are doing that, they are acting as engineers. For that matter, engineers will make observations and conduct tests as part of their design work. And so they will, on occasion, act as scientists. But the overall objectives and methods, the envelope, if you will, of a scientific enterprise is different from that of an engineering enterprise.
Here's McCloskey's website. Note, for example, the following book:
Bourgeois Equality: How Ideas, Not Capital or Institutions, Enriched the World. Vol. 3 of the trilogy “The Bourgeois Era,” University of Chicago Press, 2016, 787 + xlii pp.

The book explores the reputational rise of the bourgeoisie, that is, a Bourgeois Revaluation overtaking Holland and then Britain from Shakespeare’s time to Adam Smith. It made the modern world, by giving a reason for ordinary people to innovate. The material changes—empire, trade—were shown in Bourgeois Dignity (2010) to be wholly inadequate to explain the explosion of incomes 1800 to the present. What pushed the world into frenetic innovation were the slowly changing ideas 1600–1848 about the urban middle class and about their material and institutional innovations. A class long scorned by barons and bishops, and regulated into stagnation by its very own guilds and city councils and state-sponsored monopolies, came to be treasured—at least by the standard of earlier, implacable scorn—from 1600 to the present, first in Holland and then in Britain and then the wider world. And when the Amsterdamers after 1600 or so, and the Londoners and Bostonians after 1700 or so, commenced innovating, more people commenced admiring them. The new valuation of the bourgeoisie, a new dignity and liberty for ordinary people was a change peculiar to northwestern Europe in how people applied to economic behavior the seven old words of virtue—prudence, justice, courage, temperance, faith, hope, and love. With more or less good grace the people around the North Sea began to accept the outcome of trade-tested betterment. Then people did so in Europe generally and its offshoots, and finally in our own day in China and India. Most came for the first time to regard creative destruction as just, and were courageous about responding to it, and hopeful in promoting it. Most people, with the exception of the angry clerisy of artists and intellectuals (and even them only after 1848), stopped hating the bourgeoisie as much as their ancestors had for so very long before. Many started loving it. In consequence during a century or two the northwest Europeans became shockingly richer in goods and in spirit. That is, not economics but “humanomics” explains our riches.

University of Chicago Press page for the book.

GPT3 writes code (!?) [singularity alert]




Thursday, July 2, 2020

Sunset on the West Side as seen from Hoboken


Will the Real Singularity stand up [we need a software revolution]

We all know about the coming of the Technological Singularity, that magical moment in the future when machine intelligence will outstrip human intelligence – first by a bit, then by more, and then accelerating to All the Intelligence in the Universe – and we will be rendered...obsolete? fodder for the machines? Whatever. If you look around on New Savanna you’ll find places here and there where I say, Nonsense! We’re living the Singularity Now! I’m not entirely serious when I say that, nor am I entirely unserious. I’m mostly tired of hearing about some Magical Moment in the Future when Shazaammm! Everything Changes.

Now it’s time to get serious.

Back in 1975 Fred Brooks published The Mythical Man-Month: Essays on Software Engineering. It’s about the difficulty of writing good software. From the Wikipedia entry:
Brooks' observations are based on his experiences at IBM while managing the development of OS/360. He had added more programmers to a project falling behind schedule, a decision that he would later conclude had, counter-intuitively, delayed the project even further. He also made the mistake of asserting that one project—involved in writing an ALGOL compiler—would require six months, regardless of the number of workers involved (it required longer). The tendency for managers to repeat such errors in project development led Brooks to quip that his book is called "The Bible of Software Engineering", because "everybody quotes it, some people read it, and a few people go by it". The book is widely regarded as a classic on the human elements of software engineering.
Such problems persist.
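The combinatorics behind Brooks's observation are easy to exhibit. The n(n-1)/2 pairwise-channel count is Brooks's own arithmetic from the book; the little Python sketch below is merely an illustration of it:

```python
# Why adding programmers to a late project can make it later: labor grows
# linearly with headcount, but potential communication channels grow
# quadratically, as n(n-1)/2 pairs (Brooks's intercommunication formula).
def communication_channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 20, 50):
    print(f"{n:3d} programmers -> {communication_channels(n):5,d} pairwise channels")
# 5 programmers share 10 channels; 50 share 1,225. Past a point,
# coordination overhead swamps the added labor.
```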

Here’s Alan Kay’s keynote address for OOPSLA 1997 [Object-Oriented Programming, Systems, Languages, and Applications]:



One major topic is the underlying software architecture of the web; he notes that HTML is what happens when physicists (Tim Berners-Lee has an undergraduate degree in physics) design software. His general sentiment is that we’re doing it wrong.

Now, just the other day, Patrick Collison, co-founder of Stripe, gave an interview for The Torch of Progress (an online course in technological progress for high school students).



Starting at roughly 45:20 he remarks on the state of software:
We're still just so bad at building software. Our software sucks, our tools for building software suck, and maybe, for some deep cognitive reasons or something, we can't do very much better than we currently are. I hope that's not true and my strong supposition is that it's not true, and so the sense that we're somehow kind of plateauing, that we're as far along as possible, it just feels misguided to me. It still kills me that we're building, you know, MS-DOS programs and there are so many, many layers beyond that which we're not currently realizing.
So, 45 years after Fred Brooks argued that we don’t know what we’re doing when we build software, we’re still at it, bumbling around in the dark.

Over the same period, of course, hardware has improved vastly – if by that you mean cheaper, more powerful, and more reliable. Hardware has done most of the heavy lifting in the so-called computer revolution. Without this hardware, smartphones, server farms, and all of the rest of it would be impossible. Of course, yes, we need software running on the hardware, but the software is clunky in comparison to the underlying hardware. Hardware is so cheap that we can get by with software that wastes CPU cycles and memory.

I don’t care so much about the CPU cycles and the memory. They’re cheap. But I do care about the time wasted in producing the software, which often doesn’t work as designed or desired. That’s where we need a revolution.

A fundamental improvement in software technology – that would constitute a real technological singularity. It has always been the case that a few superb programmers are able to craft high-quality software (given time and resources, of course). We need software development technology and methods that allow programmers of medium capability to craft high-quality software.

Wednesday, July 1, 2020

75% of US venture capital goes to software [are VCs betting on the wrong future?]

It's a slow day [sky, rock/wood, canoe, dog]




Piano sales are up! [lockdown hoedown!]


From the article:
When the coronavirus sequestered Americans at home and forced businesses to close, Hale Ryan braced himself for a financial winter. As the director of sales and marketing at Metroplex Piano in Dallas and a 30-year veteran of the piano business, he had seen other crises — like 9/11 and the 2008 recession — damage sales. When the lockdown began in March, Mr. Ryan said in a recent phone interview, “I thought this was going to be the final nail.”

Instead, he began to field a flood of requests for instruments. Even with his showroom closed, the economy nose-diving and the professional music world in tatters, he sold pianos.

“It’s actually been the best three months that I’ve seen in retail,” he said.

The piano market encompasses a wide range of instruments, from hand-built concert grands that cost hundreds of thousands of dollars to factory-made uprights, digital pianos and keyboards designed for young learners. The high-water mark of piano sales in America was 1909, when 364,500 new acoustic pianos were sold in the country. Since then, radio, television, recordings and instrument technology transformed the way music is created and consumed. Only about 30,000 new acoustic pianos are now sold here each year, but the number surpasses a million when all digital varieties are included. [...]

And yet interviews with nearly a dozen dealers across the country reveal surprisingly robust sales that suggest a resurgence of at-home music-making just as the live concert scene vanished. Most of the dealers noted a rise in demand for digital pianos, which allow players to channel the sound through headphones: a key feature in households where working-from-home parents share space with distance-learning children. The phenomenon seems to be part of a general pivot toward home-based recreation, along with increased demand for gym equipment and bicycles.

Indeed, a significant portion of purchasers appears to be new to the market. Tom Sumner, the president of Yamaha Corporation of America, said in an interview that he had heard from retailers that between 20 and 25 percent of sales this spring were to first-time buyers.

Cecilia Chiang, “the Julia Child of Chinese food”, turns 100



Jeanne Lawrence, San Francisco Social Diary: A Century Of Good Taste — The Life of Culinary Icon Cecilia Chiang, New York Social Diary, June 30, 2020:
EARLY CHILDHOOD IN CHINA

Born in 1920 (Chinese year of the Monkey) near Shanghai, Cecilia was raised in Beijing (which before Mao was called Peking) in a wealthy family of twelve children (nine daughters and three sons). As a child, Cecilia was not allowed in the kitchen, as two cooks prepared Shanghai-style and Northern Mandarin-style cuisine for the family. She learned about food at the dinner table, where each dish in elaborate, multi-course meals was discussed and critiqued.

Cecilia’s privileged life came to an end in 1942, when she and a sister fled the Japanese occupation with an arduous thousand-mile, six-month trek (on foot!) from Beijing to Chongqing. She resettled in Shanghai, where, as a young woman, she met her husband and raised her children, May and Philip. The family enjoyed the sophisticated and dynamic Shanghai life when the city was booming. However, that all came to an end in 1949, when she fled from China to Japan during the Communist Revolution.

STARTING A BUSINESS IN AMERICA

In 1959, Cecilia traveled to San Francisco to visit her recently widowed sister for what was meant to be a brief stay. She stayed, and in 1961, through a series of chance encounters, she opened a Chinese restaurant on Polk Street that she named the Mandarin.

At this 65-seat “hole in the wall,” she introduced the American palate to authentic Northern Chinese cuisine from cities such as Shanghai and Beijing and the provinces of Sichuan and Hunan. Her menus were starkly different from the Americanized dishes that populated Chinese restaurants at the time, such as chop suey, chow mein, and egg foo young.

Looking back today, Cecilia says, “Maybe I was naïve about venturing into entrepreneurship in a new country, as an immigrant and in an industry dominated by men.” In her first restaurant, she wore many hats: hostess, reservationist, food procurer, waiter—even busboy! Granddaughter Siena Chiang credits her grandmother’s success to grit, luck, and “an uncanny sense for good food.”