First, a quick review of yesterday’s post, TALENT SEARCH: Generating leads, qualifying them, and closing [#EmergentVentures]. Second, a remark about how measures of precision and recall in text retrieval are relevant to the problem.
Generating leads, qualifying them, closing the deal (LQC)
Yesterday I argued that sales and searching for talent are alike in that both involve the search through a large population for a few items. In the case of sales you’re looking to close deals, to find people who will buy what you’re selling. In the case of talent search you’re looking to find people who can do something you value, whatever that is.
Given that, I suggested we examine talent search through the lens of sales process: generating leads, qualifying them, closing the deal (LQC). I then explored this suggestion by considering three cases: 1) finding the best sprinters in a city, 2) finding the best athletes in a city, and 3) finding “breakthrough individuals”, if you will, in any discipline in the United States (the MacArthur Fellows program). I ended with an exercise for the reader: Examine the Emergent Ventures program in these terms.
What emerged from this exercise, at least I think that’s what emerged (I’m still thinking about it), is that as the criterion for judging a winner becomes more complex and subtle, the time and effort devoted to applying the criterion tends to take over the whole process (not quite how I stated it yesterday). In the first case (sprinters) we can all but ignore judging in generating leads and use a quick proxy for qualification. There is no ‘real’ judging until closing (running them through heats), and the criterion for judgment is simple and straightforward (best time over distance). In the second case (best athletes) we have a problem specifying the relevant population at the leads phase, proxy measures are more complex (requiring more skilled judges), and the criteria for final judgment (closing) are deeply problematic.
In the last case (MacArthur Fellows) more or less the full suite of judgment criteria is in play through the whole process. There really is no explicit process for generating leads, but we can think of it as being implicit in the choice of anonymous nominators for a given year. Those nominators then find candidates and nominate them to the foundation, supplying the foundation with preliminary information about them. The nominators apply their own criteria for “breakthrough individual” in making their selections, and the foundation then devotes most of its efforts to applying the foundation’s current sense of things to the candidates.
If I were to undertake an economic analysis of this process, I’d want to know the proportion of search resources devoted to each of lead generation, qualifying candidates, and closing on winners in each of those three cases. In the case of the MacArthur Fellows program, I note that, in effect, the foundation devotes ALL of its resources to applying judgment criteria in the closing phase. How does it manage this? It externalizes the costs of generating leads and qualifying them: the nominators are not paid.
An exercise for the reader: How does Emergent Ventures distribute its resources over the phases of LQC?
Precision and recall in document search
What can library science contribute to thinking about this problem? A central problem goes like this: We have a large, even a VAST collection of documents. Users of the collection want to find documents relevant to some particular interest. What’s the most efficient way of doing this?
That’s the same problem we’ve seen in sales and in talent search: searching a large collection of items (documents or people) for a few items of interest.
In the old days, before computers, we had card catalogues. Card catalogues had scads of small narrow drawers filled with cards, each listing an individual document along with some basic information about that document, including its location in the library. Typically one would find catalogues where items are listed alphabetically by: 1) author (last) name, 2) title of the item, and 3) subject (according to some standard system). In our LQC model, think of the catalogue as a tool for generating leads. If you know a specific title or titles you are seeking, that knowledge serves to qualify items. The same with author names. You then consult the relevant catalogue drawers to close in on their locations in the library stacks. This kind of search is relatively efficient.
If you don’t have the names of authors or specific titles, things get more complicated. Obviously you can search the subject catalogue. Your success here depends on both the structure of the subject classification system and your knowledge of that structure. If your knowledge is poor, you may want to consult a librarian, who will know the system far better than you do.
Things changed when computers became widely available and library catalogues became computerized, something that happened during my student years. Of course you could search a computerized catalogue in the same ways as the physical card catalogue, by author, title, and subject. It’s subject search that interests me.
Computer scientists soon realized that one could conduct subject matter searches in more interesting ways than simply trolling through a subject-matter tree. You could, for example, do Boolean searches over keywords. Cool, but not very. I have something else in mind.
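For the record, a Boolean keyword search is little more than set operations over an inverted index. Here’s a toy sketch in Python; the documents and keywords are made up for illustration:

```python
# A toy Boolean keyword search: an inverted index maps each keyword to the
# set of documents containing it, and a query like "cats AND dogs" becomes
# a set intersection. (Illustrative only; documents are invented.)

docs = {
    1: "the cats sat on the mat",
    2: "dogs and cats living together",
    3: "a treatise on dogs",
}

# Build the inverted index: word -> set of document ids.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

# "cats AND dogs" = intersection; "cats OR dogs" = union.
print(index.get("cats", set()) & index.get("dogs", set()))  # {2}
print(index.get("cats", set()) | index.get("dogs", set()))  # {1, 2, 3}
```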
Why don’t you simply enter a search query in whatever form you choose and, in effect, tell the system: Get me one like that? For that purpose computer scientists – I’m thinking in particular of Gerard Salton – developed ways of representing texts as vectors in a high-dimensional space. A query then becomes such a vector and the system searches the vectors of items in the system for matches.
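Here’s a minimal sketch of that vector-space idea, a toy bag-of-words model with cosine similarity. The documents are invented for illustration, and real systems (Salton’s included) use more sophisticated term weighting, but the gist is the same: the query becomes a vector, and the best match is the document whose vector points in nearly the same direction.

```python
# Toy vector-space retrieval: documents and queries become word-count vectors,
# and "get me one like that" becomes "find the document whose vector is most
# nearly parallel to the query vector."
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words vector: word -> count.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine of the angle between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "card catalogues list items by author title and subject",
    "sprinters compete in heats over a fixed distance",
    "vector space models represent documents for retrieval",
]
query = vectorize("representing documents as vectors for retrieval")

# Rank documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda d: cosine(query, vectorize(d)), reverse=True)
print(ranked[0])  # the document about vector space models
```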
But I’m not interested in how that works – well I am, but not here and now. I’m interested in measuring how effective a given query is. That’s where we talk of precision and recall.
Assume some collection of documents. Somewhere in that collection is a set of documents relevant to a query. A perfect query will find only documents in that set, and is thus very precise. It will also find all the documents in that set, and thus has maximum recall. Definitions, from Wikipedia (edited):
Precision: the fraction of retrieved documents that are relevant to the query.
Recall: the fraction of the relevant documents that are successfully retrieved.
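In code, those definitions come down to a couple of set operations and divisions. A quick sketch in Python, with made-up document ids:

```python
# Precision and recall for one query, computed directly from the definitions
# above. The document ids are invented for illustration.

relevant = {1, 2, 3, 4, 5}        # documents actually relevant to the query
retrieved = {3, 4, 5, 6, 7, 8}    # documents the query actually returned

hits = relevant & retrieved        # relevant documents that were retrieved

precision = len(hits) / len(retrieved)   # 3 / 6 = 0.5
recall = len(hits) / len(relevant)       # 3 / 5 = 0.6

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```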
Extensive experimentation has shown that it is difficult to create a system in which queries are both very precise and have high recall. If you want high recall, set your parameters loosely. You’ll retrieve all or most of the documents that are relevant to the interest motivating your query, but you are also likely to retrieve many documents of little or no interest. If you don’t want to waste time trolling through useless documents you can set the parameters for high precision. But then you may miss some relevant documents, though most or all of the ones you retrieve will be of value.
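To make the tradeoff concrete, here’s a toy sketch in which the “parameter” is a similarity cutoff; the scores and relevance judgments are invented for illustration. A loose cutoff gives high recall at the cost of precision; a tight cutoff gives the reverse.

```python
# The precision/recall tradeoff in miniature: score each document against a
# query (scores here are made up), then vary the cutoff. A loose cutoff
# retrieves everything relevant but drags in junk (high recall, low precision);
# a tight cutoff returns mostly good documents but misses some (high precision,
# lower recall).

scores = {        # doc id -> similarity score (illustrative numbers)
    1: 0.9, 2: 0.8, 3: 0.7, 4: 0.4, 5: 0.3, 6: 0.2, 7: 0.1,
}
relevant = {1, 2, 3, 5}

for cutoff in (0.15, 0.35, 0.65):
    retrieved = {d for d, s in scores.items() if s >= cutoff}
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved)
    recall = len(hits) / len(relevant)
    print(f"cutoff {cutoff}: precision {precision:.2f}, recall {recall:.2f}")
```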
We can now see how this is relevant to talent search. Our measures of precision and recall apply at the qualification phase. If the qualification criteria are set loosely, you may well identify a very large percentage of viable candidates, but you’re going to have to search through a lot of duds to identify them. If the qualification criteria are set tightly then you won’t have to examine a bunch of losers, but you’re likely to miss some superb candidates.
An exercise for the reader
How would you apply the concepts of precision and recall to the MacArthur Fellows process and to the Emergent Ventures process? Hint: Tyler Cowen asks us to “think of Emergent Ventures as a bet on my own personal judgment.” Think of his statements about what he’s looking for as search queries. What space is he searching?