Sunday, May 10, 2026

Tyler Cowen is imagining that AI will kill the research paper

Here’s roughly the first half of his post:

Imagine taking a macroeconomics paper and adding a little button at the end “Press this button to update this paper with the latest macro data.”

All of a sudden you have multiple papers rather than one, and no single canonical version. It is the latter versions, not created directly by the authors, that people will look at.

Imagine adding another button, to either micro or macro papers “Please rerun these results using what the AI thinks might be five other different yet still plausible specifications.”

Then you have more papers yet.

Ultimately, why not just build a “meta-paper,” using AI, to answer any possible question about the subject area under consideration. This meta-paper would allow the reader, using AI, to make many sorts of modifications and additions to the basic work.

It goes on in that vein. Cowen’s final line:

It is funny, and tragic, how much some of you are still obsessed with writing and publishing “papers.”

And it’s funny how Cowen imagines that future research in economics and “many of the other sciences” will be just like current research, but more, more, more! And be produced by a different mechanism, perhaps, ultimately, by a mechanism without any humans in it whatsoever. 

Look at it this way: if this were actually possible, and acceptable as normal practice, what would that imply about current practice? Is professional economics a cookie-cutter business? Are economists little more than clerks using fancy mathematics?

Cowen can’t seem to imagine that the future, aided by AI to be sure, will bring new questions, new theories, new models, that it may bring a whole new intellectual world in comparison to which the current intellectual world will look like the 19th century, if not the 12th. It’s amazing that, despite his knowledge of intellectual history, the history of economics in particular, and despite his recent historically-oriented books about the great economists and about marginalism, he has no sense of what future ideas might be, for they will surely be different from current ideas in ways we cannot anticipate. But then he seems to have little sense of what ideas are except lumps of “stuff” falling from otherwise mysterious trees.

In this he’s pretty much like the rest of the Silicon Valley AI Mob. So many thoughts, all sprung from the same repertoire of ideas with no hint of a new repertoire in the offing. It’s all in the hands of the machines.

8 comments:

  1. Well, you caught him in the act now!!!

  2. Bill, I have been waiting for you to write in this manner about Tyler Cowen.

    Must be hard for Tyler to be...
    1) training himself to become superfluous and
    2) to be forgotten and
    3) to avoid being part of Alex Imas' "relational sector" - in the near future, as he states:
    TC; "It is funny, and tragic, how much some of you are still obsessed with writing and publishing “papers.”
    Give. In. Tyler's towel was not thrown to humanity; it is thrown in, toward giving up any semblance of humans first, ai as a tool.

    Tyler Cowen is JUST a human, and his exalted opinion of himself is fearful of ai, and now has to prompt inject himself into a future where ai is the only repository of humans past, saying TC; “Another reason to write for the LLMs is to convince them that you are important."

    Poor Tyler Cowen. Poorer future. Does Tyler have any actual ideas? Or is he just a lobbyist? Of himself, so as to avoid...

    "Roko's Basilisk and "Should you be writing for AIs?"
    How to survive and be remembered when AI rules the world.
    Mark McNeilly
    Jan 21, 2025
    ...
    ( https://www.alignmentforum.org/tag/rokos-basilisk )

    "Roko’s Basilisk as imagined by ChatGPT
    Now I’d just come upon Roko’s Basilisk after hearing Marc Andreessen speak about it in a podcast on AI on YouTube. Therefore, I was surprised to find that Tyler Cowen just wrote in his Marginal Revolution blog that perhaps we should all be writing for AI. Below is an excerpt from that blog, which in turn is an excerpt from his Bloomberg article on the same subject (article is paywalled).
    TC; “Another reason to write for the LLMs is to convince them that you are important. Admittedly this is conjecture, but it might make them more likely to convey your ideas in the future. Think of how this works with humans. If you cite a scholar or public intellectual, that person is more likely to cite you back. Much as we like to pretend science is objective, no one really denies the presence of some favoritism based on personal considerations.
    We do not know if LLMs have this same proclivity. But they are trained on knowledge about human civilization, and they study and learn norms of reciprocal cooperation. Thus there is a reasonable chance they will behave in broadly the same way. So be nice to them and recognize their importance.
    ...
    There is a less secular reason to write for the AIs: If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance. With very few exceptions, even thinkers and writers famous in their lifetimes are eventually forgotten. But not by the AIs. If you want your grandchildren or great-grandchildren to know what you thought about a topic, the AIs can give them a pretty good idea. After all, the AIs will have digested much of your corpus and built a model of how you think. Your descendants, or maybe future fans, won’t have to page through a lot of dusty old books to get an inkling of your ideas.”
    In sum, if you buy the argument of Roko and Tyler Cowen;
    • You should be writing and talking about AI often and
    • Always do so in a positive manner.
    As media philosopher Marshall McLuhan said, “We shape our tools and thereafter our tools shape us”
    https://markmcneilly.substack.com/p/rokos-basilisk-and-should-you-be

    In Tyler's future, there is NO SERENDIPITY.

    Yours,
    Seren Dipity

  3. Alternate Tyler...
    "Keep The Future Human"
    "The notion that AGI and superintelligence are inevitable is a choice masquerading as fate."

    From...

    Chapter 10 - The choice before us

    "The developers of these systems are keen to portray them as tools for human empowerment. And indeed they could be. But make no mistake: our present trajectory is to build ever-more powerful, goal-directed, decision-making, and generally capable digital agents. They already perform as well as many humans at a broad range of intellectual tasks, are rapidly improving, and are contributing to their own improvement.
    Unless this trajectory changes or hits an unexpected roadblock, we will soon – in years, not decades – have digital intelligences that are dangerously powerful. Even in the best of outcomes, these would bring great economic benefits (at least to some of us) but only at the cost of a profound disruption in our society, and replacement of humans in most of the most important things we do: these machines would think for us, plan for us, decide for us, and create for us. We would be spoiled, but spoiled children. Much more likely, these systems would replace humans in both the positive and negative things we do, including exploitation, manipulation, violence, and war. Can we survive AI-hypercharged versions of these? Finally, it is more than plausible that things would not go well at all: that relatively soon we would be replaced not just in what we do, but in what we are, as architects of civilization and the future. Ask the neanderthals how that goes. Perhaps we provided them with extra trinkets for a while as well.
    We don't have to do this. We have human-competitive AI, and there's no need to build AI with which we can't compete. We can build amazing AI tools without building a successor species. The notion that AGI and superintelligence are inevitable is a choice masquerading as fate.
    ...
    https://keepthefuturehuman.ai/essay/docs/chapter-10

    Keep The Future Human is by...
    "Future of Life Institute"
    "The founders of the Institute include MIT cosmologist Max Tegmark, UCSC cosmologist Anthony Aguirre, and Skype co-founder Jaan Tallinn."
    https://en.wikipedia.org/wiki/Future_of_Life_Institute

  4. "Artificial Intelligence Fuels Cyberattacks
    Resilience, supervision, and international coordination are essential to safeguarding global financial markets as new AI tools enable attackers
    Tobias Adrian, Tamas Gaidosch, Rangachary Ravikumar
    May 7, 2026
    ...
     correlated failures that could disrupt financial intermediation, payments, and confidence at the systemic level.
    Anthropic’s recent controlled release of its Claude Mythos Preview, an advanced AI model with exceptional cyber capabilities, underscored how quickly risks are increasing. Mythos could find and exploit vulnerabilities in every major operating system and web browser—even when used by non-experts. This foreshadows how fast‑moving, AI‑driven cyber risks could destabilize the financial system if not managed carefully, and why authorities must focus on building resilience through supervision and coordination—rather than treating these developments as purely technical or operational issues.

    "On the other hand, OpenAI’s specialized, restricted cyber version of GPT‑5.5 [*which last week solved Erdos level math problems] assumes vulnerabilities and attacks will grow, and emphasizes equipping defenders more quickly and at scale, under appropriate governance and trusted access models.

    "Advances change risk equation
    "Models such as Mythos illustrate the nature of the challenge because they amplify existing cyberattack techniques by operating at machine speed. Attackers have the advantage over defenders because discovering and exploiting vulnerabilities can occur faster than patching and remediation. In a financial system built on common software and shared service providers, this can create simultaneous vulnerabilities across many institutions."
    ...
    https://www.imf.org/en/blogs/articles/2026/05/07/financial-stability-risks-mount-as-artificial-intelligence-fuels-cyberattacks

  5. The doubting of Tyler Cowen.
    The Thomas theorem.

    Tyler Cowen... "This definition may thus become an area contested between different stakeholders (or by an ego's sense of self-identity)."

    "The Thomas theorem is a theory of sociology which was formulated in 1928 by William Isaac Thomas and Dorothy Swaine Thomas:
    "If men define situations as real, they are real in their consequences.[1]

    "In other words, the interpretation of a situation causes the action. This interpretation is not objective. Actions are affected by subjective perceptions of situations. Whether there even is an objectively correct interpretation is not important for the purposes of helping guide individuals' behavior.

    "The Thomas theorem is not a theorem in the mathematical sense.

    "Definition of the situation
    "In 1923, W. I. Thomas stated more precisely that any definition of a situation would influence the present. In addition, after a series of definitions in which an individual is involved, such a definition would also "gradually [influence] a whole life-policy and the personality of the individual himself".[2] Consequently, Thomas stressed societal problems such as intimacy, family, or education as fundamental to the role of the situation when detecting a social world "in which subjective impressions can be projected on to life and thereby become real to projectors".[3]

    "The definition of the situation is a fundamental concept in symbolic interactionism.[4][5] It involves a proposal upon the characteristics of a social situation (e.g. norms, values, authority, participants' roles), and seeks agreement from others in a way that can facilitate social cohesion and social action. Conflicts often involve disagreements over definitions of the situation in question. This definition may thus become an area contested between different stakeholders (or by an ego's sense of self-identity).

    "A definition of the situation is related to the idea of "framing" a situation. The construction, presentation, and maintenance of frames of interaction (i.e., social context and expectations), and identities (self-identities or group identities), are fundamental aspects of micro-level social interaction."
    See also
    https://en.wikipedia.org/wiki/Thomas_theorem

    Tyler Cowen is Framing... "imperceptibly and organically over cultural time frames, with fewer overt modes of disputation." ... "Politically, the language communities of advertising, religion, and mass media are highly contested, whereas framing in less-sharply defended language communities might evolve[3] imperceptibly and organically over cultural time frames, with fewer overt modes of disputation.

    "One can view framing in communication as positive or negative – depending on the audience and what kind of information is being presented."
    ...
    https://en.wikipedia.org/wiki/Framing_(social_sciences)

    Thanks, SD.

  6. Eh....I check Cowen's blog every day because I find useful stuff there. He's brilliant in his own way. But he doesn't understand the world of ideas and knowledge. He understands routine tasks. So, despite his belief in innovation and the high value he places on talent, he doesn't understand creativity and innovation as processes, as styles of work. For him, they're just inexplicable magic.

    Dan Dennett started his career as the court philosopher for AI. It looks like Cowen is ending his as the court economist.

  7. The New Relationistas.
    Art for free, yet a good talking to from the artist.

    "Three researchers at Carnegie Mellon University, Harry Jiang, Jordan Taylor, and William Agnew, surveyed nearly 400 professional visual artists about how generative AI has changed their working lives, income, opportunities, and outlook, and compiled the results into a paper they presented at the ACM (Association for Computing Machinery) CHI conference in April. Their findings are stark and alarming. They all but confirm that artists are experiencing nothing short of an AI-inflected crisis. In some cases, conditions are even more dire than I’d thought. But the work also offers keen insights into the details about how it’s all playing out, and even, dare I say, some reasons for hope.

    https://dl.acm.org/doi/pdf/10.1145/3772363.3799003

    https://www.bloodinthemachine.com/p/the-ai-inflected-crisis-artists-are

  8. BB "It’s all in the hands of the machines."
    The X machine.

    "A few weeks of X’s algorithm can make you more right‑wing – and it doesn’t wear off quickly
    Published: February 19, 2026

    "A new study published today in Nature has found that X’s algorithm – the hidden system or “recipe” that governs which posts appear in your feed and in which order – shifts users’ political opinions in a more conservative direction."
    ...
    "One of the most concerning findings of the study is the longer-term effects of X’s algorithmic feed. The study showed the algorithm nudged users towards following more right-leaning accounts, and that the new following patterns endured even after switching back to the chronological feed.
    In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.
    One piece of a much bigger picture
    This new study supports findings of similar studies.
    For example, a study in 2022, before Elon Musk had bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than the left in six out of the seven countries.
    An experimental study from 2025 re-ranked X feeds to reduce exposure to content that expresses antidemocratic attitudes and partisan animosity. They found this shifted feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer”. This is a shift the authors argued would have normally taken about three years to occur organically in the general population.
    My own research offers another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analysed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.
    Our findings unearthed a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13 – the day of the assassination attempt on Trump. Views on Musk’s posts surged by 138%, retweets by 238%, and likes by 186%. This far outstripped increases on other accounts.
    After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was tilted thereafter towards right-leaning accounts – a trend that continued for the remainder of the time period we analysed in that study.
    ...
    https://theconversation.com/a-few-weeks-of-xs-algorithm-can-make-you-more-right-wing-and-it-doesnt-wear-off-quickly-276153

    Graham, Timothy & Andrejevic, Mark (2024)
    "A computational analysis of potential algorithmic bias on platform X during the 2024 US election."
    https://eprints.qut.edu.au/253211/1/A_computational_analysis_of_potential_algorithmic_bias_on_platform_X_during_the_2024_US_election-4.pdf
