From Rafael Alvarado, The Code Problem:
The first is to learn for the reason that Tim Berners-Lee exhorts journalists to learn–you need to know how to use tools to manipulate data because knowledge is increasingly produced as data, that is, in more or less structured forms all over the web. This is because the future of the humanities “lies with
~~journalists~~ humanists who know their CSV from their RDF, can throw together some quick MySQL queries for a PHP or Python output … and discover the story lurking in datasets released by governments, local authorities, agencies, digital archives, online libraries, academic centers, or any combination of them – even across national borders.” [...]
The second reason to learn to code is philosophical. You should be able to write code–not necessarily program or, God forbid, “develop”–so that you can understand how machines think. Play with simple algorithms, parse texts and create word lists, generate silly patterns à la 10 PRINT. Get a feel for what these so-called computer languages do. Get a feel for the proposition, to which I mostly assent, that text is a kind of code and code a kind of text (but with really important differences that you won’t discover or understand until you play around with code). This level of knowledge does not require any great mastery of a language in my view. It only requires a willingness to get one’s hands dirty, make mistakes, and accept the limitations of beginner’s knowledge. I personally believe that this second reason is as important as, or more important than, the first.
To get to this place with code, to be able to write simple scripts that are useful or interesting or both, you don’t need to do many of the things your coding brethren think you should do. First and foremost, you don’t need to learn a specific language unless there is a compelling local reason to do so, such as being in a class or on a project that uses the language. [...]
Second, you don’t need to be involved in writing a full-blown application to do DH-worthy coding. Applications are fine, and being on a collaborative project has huge benefits of its own, but know that application development is a huge time-suck and that applications are like restaurants–fun to set up but most likely to fail in the real world. Lots of DH coding projects in my experience are journeys, not destinations. [...]
Third, there is no reason ever to be forced into using a specific editor or coding environment, especially if it is a difficult one that “real” coders use. [...]
Beyond these specific problems, though, there is a more fundamental issue about the culture of code that contributes to the condition that Miriam [Posner] and others confront: in spite of the well-meaning desire by many coders to bring everyone into the coding fold, there is a countervailing force that prevents this from happening and which emanates from these same coders. This is the force of mystification. Mystification appears in many forms, including some of the things I just described–insisting on a difficult editor, dissing certain languages–but it more generally comes from treating code competence as a source of identity, whether it be personal or disciplinary. As long as digital humanists regard coding as a marker of prestige–and software as a token in the academic economy–and not as a means to other forms of prestige (such as making discoveries or writing books), then knowledge of coding will always be hedged in by taboos and rites of passage that will have the effect of pushing away newcomers.
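The kind of low-stakes play Alvarado describes — parsing a text into a word list, generating silly patterns à la 10 PRINT — fits in a few lines of Python. This is a hedged sketch, not anything from Alvarado's post; the sample sentence and all names here are invented for illustration:

```python
import collections
import random
import re

# "Parse texts and create word lists": tokenize a (made-up) sample
# sentence and count word frequencies.
text = "Text is a kind of code and code is a kind of text."
words = re.findall(r"[a-z']+", text.lower())
counts = collections.Counter(words)
print(counts.most_common(3))

# A nod to 10 PRINT (the one-line Commodore 64 maze program,
# 10 PRINT CHR$(205.5+RND(1)); : GOTO 10): print a random maze
# built from backslash and slash characters.
maze = "".join(random.choice("\\/") for _ in range(40))
print(maze)
```

Nothing here requires mastery of the language — which is exactly the point: a beginner can write, break, and fiddle with a script like this in an afternoon and come away with a feel for what code does.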
Addendum (7.11.16): From an interview with Pamela Fletcher in the Los Angeles Review of Books:
No, I don’t think you necessarily need to know how to code to do meaningful digital humanities work, not least because collaboration is a central part of DH work and the idea of people bringing different skill sets — and research problems — together is one of its core strengths. Yes, because as a humanist I am deeply committed to the idea that in order to communicate with other people you need to speak their language, and coding is the language of computation. In our new Digital and Computational Studies curriculum at Bowdoin we are starting from the premise that every student who goes through our program needs to understand at least the underlying logic of how computers work and the many layers of abstraction and representation that lie between binary code and what you see on the screen. This is partially about communication: you need to understand what computers are (and aren’t) good at in order to come up with intelligent computational problems and solutions. But it is also because each stage of computational abstraction involves decisions that are essentially acts of interpretation, and you can’t do meaningful work if you don’t understand that. This is equally true, of course, of anyone who uses technology, which is most of us. So I’d say ideally we should be educating all our students to be computationally literate, which is not the same as being expert programmers.
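Fletcher's point that each layer between binary code and the screen involves interpretive decisions can be shown concretely. The example below is my own illustration, not from the interview: the same two bytes yield different text depending on which encoding — a human decision, not a property of the bits — is applied:

```python
# Two raw bytes: in themselves they are just numbers.
raw = b"\xc3\xa9"

# Interpreted as UTF-8, they encode a single accented character.
print(raw.decode("utf-8"))    # é

# Interpreted as Latin-1, the very same bytes become two characters.
print(raw.decode("latin-1"))  # Ã©
```

The bytes never change; what changes is the interpretive frame brought to them — a small instance of Fletcher's claim that computational abstraction is built out of acts of interpretation.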