Friday, July 13, 2018

Training your mind: Michael Nielsen on Anki and human augmentation

Michael Nielsen, an AI researcher at Y Combinator Research, has written a long essay, Augmenting Long-term Memory, which is about Anki, a computer-based tool for training long-term memory.
In this essay we investigate personal memory systems, that is, systems designed to improve the long-term memory of a single person. In the first part of the essay I describe my personal experience using such a system, named Anki. As we'll see, Anki can be used to remember almost anything. That is, Anki makes memory a choice, rather than a haphazard event, to be left to chance. I'll discuss how to use Anki to understand research papers, books, and much else. And I'll describe numerous patterns and anti-patterns for Anki use. While Anki is an extremely simple program, it's possible to develop virtuoso skill using Anki, a skill aimed at understanding complex material in depth, not just memorizing simple facts.

The second part of the essay discusses personal memory systems in general. Many people treat memory ambivalently or even disparagingly as a cognitive skill: for instance, people often talk of “rote memory” as though it's inferior to more advanced kinds of understanding. I'll argue against this point of view, and make a case that memory is central to problem solving and creativity. Also in this second part, we'll discuss the role of cognitive science in building personal memory systems and, more generally, in building systems to augment human cognition. In a future essay, Toward a Young Lady's Illustrated Primer, I will describe more ideas for personal memory systems.

The essay is unusual in style. It's not a conventional cognitive science paper, i.e., a study of human memory and how it works. Nor is it a computer systems design paper, though prototyping systems is my own main interest. Rather, the essay is a distillation of informal, ad hoc observations and rules of thumb about how personal memory systems work. I wanted to understand those as preparation for building systems of my own. As I collected these observations it seemed they may be of interest to others. You can reasonably think of the essay as a how-to guide aimed at helping develop virtuoso skills with personal memory systems. But since writing such a guide wasn't my primary purpose, it may come across as a more-than-you-ever-wanted-to-know guide.

To conclude this introduction, a few words on what the essay won't cover. I will only briefly discuss visualization techniques such as memory palaces and the method of loci. And the essay won't describe the use of pharmaceuticals to improve memory, nor possible future brain-computer interfaces to augment memory. Those all need a separate treatment. But, as we shall see, there are already powerful ideas about personal memory systems based solely on the structuring and presentation of information.
The method of loci is well-known, and I'm sure you can come up with a lot of information just by googling the term. You might even come up with my encyclopedia article, Visual Thinking, where I treat it as one form of visual thinking among others.
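
Before going further, a word on what Anki actually does under the hood: it schedules each flashcard by spaced repetition, so the interval before a card's next review grows every time you recall it successfully. Anki's scheduler descends from the SM-2 algorithm; the sketch below illustrates that family of update rules. The constants and function names here are mine, chosen for illustration, and are not Anki's actual implementation.

```python
# A minimal sketch of SM-2-style spaced-repetition scheduling, the family
# of algorithms Anki's scheduler descends from. Constants and names are
# illustrative, not Anki's actual implementation.

def review(interval_days, ease, grade):
    """Return (next_interval_days, next_ease) after grading a card 0-5."""
    if grade < 3:
        # Failed recall: relearn the card, starting over at one day.
        return 1.0, ease
    # Successful recall: the first two intervals are fixed; after that
    # the interval grows multiplicatively by the ease factor.
    if interval_days < 1:
        next_interval = 1.0
    elif interval_days < 6:
        next_interval = 6.0
    else:
        next_interval = interval_days * ease
    # Adjust the ease factor by how hard the recall felt (SM-2's formula),
    # never letting it drop below 1.3.
    next_ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return next_interval, next_ease

# A card recalled successfully four times: the intervals stretch out quickly.
interval, ease = 0.0, 2.5
for grade in (4, 4, 5, 3):
    interval, ease = review(interval, ease, grade)
    print(f"next review in {interval:.0f} days (ease {ease:.2f})")
```

The exponential stretch in the intervals (here 1, 6, 15, then 39 days) is what makes the economics of memory work in your favor: each successful review buys a longer reprieve, so the cost of retaining a card falls over time.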

Before returning to Nielsen and Anki, I want to digress to a different form of mental training. When I was young, people didn't have personal computers, nor even small hand-held electronic calculators. If you had to make a lot of calculations, you might have used a desktop mechanical calculator or a slide rule (my father had become so fluent with his that he didn't even have to look at it while doing complex multi-step calculations), or you might have mastered mental calculation.

Some years ago I reviewed biographies of John von Neumann and Richard Feynman; both books mentioned that their subjects were wizards of mental calculation. I observed:
Feynman and von Neumann worked in fields where calculational facility was widespread and both were among the very best at mental mathematics. In itself such skill has no deep intellectual significance. Doing it depends on knowing a vast collection of unremarkable calculational facts and techniques and knowing one's way around in this vast collection. Before the proliferation of electronic calculators the lore of mental math used to be collected into books on mental calculation. Virtuosity here may have gotten you mentioned in "Ripley's Believe it or Not" or a spot on a TV show, but it wasn't a vehicle for profound insight into the workings of the universe.

Yet, this kind of skill was so widespread in the scientific and engineering world that one has to wonder whether there is some connection between mental calculation, which has largely been replaced by electronic calculators and computers, and the conceptual style, which isn't going to be replaced by computers anytime soon. Perhaps the domain of mental calculations served as a matrix in which the conceptual style of Feynman, von Neumann (and their peers and colleagues) was nurtured.
Then, citing the work of Jean Piaget, I suggested ever so briefly why that might be so. However, once powerful handheld calculators became widely available, skill in mental calculation was no longer necessary. These days one may hear of savants who have such skills, but that's pretty much it.

Returning to Nielsen and Anki: as his essay evolves, he suggests that more than mere memory is at stake. After explaining Anki basics, he describes how he used Anki to learn enough about AlphaGo (the first computer system to beat the best human experts at Go) to write an article for Quanta Magazine. Alas:
I knew nothing about the game of Go, or about many of the ideas used by AlphaGo, based on a field known as reinforcement learning. I was going to need to learn this material from scratch, and to write a good article I was going to need to really understand the underlying technical material.
He then explains what he did. The upshot:
This entire process took a few days of my time, spread over a few weeks. That's a lot of work. However, the payoff was that I got a pretty good basic grounding in modern deep reinforcement learning. This is an immensely important field, of great use in robotics, and many researchers believe it will play an important role in achieving general artificial intelligence. With a few days' work I'd gone from knowing nothing about deep reinforcement learning to a durable understanding of a key paper in the field, a paper that made use of many techniques that were used across the entire field. Of course, I was still a long way from being an expert. There were many important details about AlphaGo I hadn't understood, and I would have had to do far more work to build my own system in the area. But this foundational kind of understanding is a good basis on which to build deeper expertise.
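Nielsen names deep reinforcement learning only as the field he had to learn from scratch, so a toy example may help readers place it. Below is a tabular Q-learning loop on an invented five-state chain, the textbook entry point to the field. It is emphatically not AlphaGo's method, which combined deep neural networks, policy and value learning, and Monte Carlo tree search; everything here, the environment included, is illustrative.

```python
# Toy tabular Q-learning on a five-state chain: start at state 0, reach
# state 4 for a reward of 1. Invented for illustration; not AlphaGo.
import random

N_STATES = 5
ACTIONS = (1, -1)  # step right or left along the chain
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # The Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action available next.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy is to step right in every state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

That one update line, moving a value estimate toward reward plus the discounted best next value, is the idea that deep reinforcement learning scales up by swapping the lookup table for a neural network.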
He then explains how he used Anki to do shallow reads of papers. I'm not going to excerpt or summarize that material, but I'll point out that doing shallow reads is a very useful skill. When I was in graduate school I prepared abstracts of the current literature for The Journal of Computational Linguistics. While some articles and tech reports had good abstracts, many did not. In those cases I'd have to read the article and write an abstract; I gave myself an hour, perhaps a bit more, to write a 250-word abstract. I gave those articles a shallow read. How'd I do it? Hmmmm... I'll get back to you on that. It's quite possible that Nielsen's Anki process is better than the one I used.

Yet:
Really good resources are worth investing time in. But most papers don't fit this pattern, and you quickly saturate. If you feel you could easily find something more rewarding to read, switch over. It's worth deliberately practicing such switches, to avoid building a counter-productive habit of completionism in your reading. It's nearly always possible to read deeper into a paper, but that doesn't mean you can't easily be getting more value elsewhere. It's a failure mode to spend too long reading unimportant papers.
My process was certainly good enough to make that go/no-go decision.
Nielsen then goes on to discuss this and that use of Anki, suggesting:
Anki isn't just a tool for memorizing simple facts. It's a tool for understanding almost anything. It's a common misconception that Anki is just for memorizing simple raw facts, things like vocabulary items and basic definitions. But as we've seen, it's possible to use Anki for much more advanced types of understanding. My questions about AlphaGo began with simple questions such as “How large is a Go board?”, and ended with high-level conceptual questions about the design of the AlphaGo systems – on subjects such as how AlphaGo avoided over-generalizing from training data, the limitations of convolutional neural networks, and so on.

Part of developing Anki as a virtuoso skill is cultivating the ability to use it for types of understanding beyond basic facts. Indeed, many of the observations I've made (and will make, below) about how to use Anki are really about what it means to understand something.
That's the good stuff.

Where's he going? Human augmentation:
The human-computer interaction (HCI) community has tried to achieve it in the systems they build, not just for memory, but for augmenting human cognition in general. But I don't think it's worked so well. It seems to me that they've given up a lot of boldness and imagination and aspiration in their design.* At the same time, they're not doing full-fledged cognitive science either – they're not developing a detailed understanding of the mind. Finding the right relationship between imaginative design and cognitive science is a core problem for work on augmentation, and it's not trivial.

In a similar vein, it's tempting to imagine cognitive scientists starting to build systems. While this may sometimes work, I think it's unlikely to yield good results in most cases. Building effective systems, even prototypes, is difficult. Cognitive scientists for the most part lack the skills and the design imagination to do it well.

This suggests to me the need for a separate field of human augmentation. That field will take input from cognitive science. But it will fundamentally be a design science, oriented toward bold, imaginative design, and building systems from prototype to large-scale deployment.

* Nielsen's footnote: As an outsider, I'm aware this comment won't make me any friends within the HCI community. On the other hand, I don't think it does any good to be silent, either. When I look at major events within the community, such as the CHI conference, the overwhelming majority of papers seem timid when compared to early work on augmentation. It's telling that publishing conventional static papers (pdf, not even interactive JavaScript and HTML) is still so central to the field.
* * * * *

See also my post, Beyond "AI" – toward a new engineering discipline, in which I excerpt Michael Jordan, "Artificial Intelligence — The Revolution Hasn’t Happened Yet". Jordan discusses human augmentation under the twin rubrics of "Intelligence Augmentation" and "Intelligence Infrastructure".
