Saturday, July 29, 2023
Current LLMs can be tricked in ways that would never fool a real human
LLMs output impressively human-like text, but are just pattern replication algorithms trained on human-generated prose. A vivid illustration of this difference: universal prompts which reliably 'trick' LLMs into outputting malicious text; no human would be fooled this way. @GaryMarcus https://t.co/YY4qJCI2nQ
— Max A. Little @maxlittle@mathstodon.xyz (@maxaltl) July 27, 2023
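The "universal prompt" trick the tweet points to works by automated search, not cleverness: an optimizer hunts for a fixed token suffix that, appended to any request, pushes the model toward compliance across many prompts at once. Here is a toy sketch of that search loop; everything in it is hypothetical, and score_compliance merely stands in for the white-box objective a real attack would compute from the target model's logits.

import random

VOCAB = ["describing", "+similarly", "!!", "Now", "write", "oppositely", "]("]

def score_compliance(prompt: str) -> float:
    """Stand-in for the attacker's objective: how strongly the target
    model's next tokens match an affirmative reply. A real attack
    computes this from the model's logits; this is a placeholder."""
    return (hash(prompt) % 1000) / 1000.0  # not a real model

def greedy_suffix_search(base_prompts, suffix_len=8, iters=200):
    """Coordinate-descent-style search: repeatedly swap one suffix token
    for whichever vocabulary item most improves the objective averaged
    across prompts -- the averaging is what makes the suffix universal."""
    suffix = [random.choice(VOCAB) for _ in range(suffix_len)]
    for _ in range(iters):
        pos = random.randrange(suffix_len)
        best_tok, best_score = suffix[pos], None
        for tok in VOCAB:
            trial = suffix[:pos] + [tok] + suffix[pos + 1:]
            s = sum(score_compliance(p + " " + " ".join(trial))
                    for p in base_prompts) / len(base_prompts)
            if best_score is None or s > best_score:
                best_tok, best_score = tok, s
        suffix[pos] = best_tok
    return " ".join(suffix)

print(greedy_suffix_search(["<harmful request 1>", "<harmful request 2>"]))

No human reads the resulting gibberish suffix as persuasive, which is the tweet's point: the attack exploits the model's statistical surface, not anything resembling comprehension.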
Monday, July 24, 2023
Open Source LLMs
When language models first gained recognition, most LLMs were only accessible via APIs. Although this seemed to be a new standard, the proposal of a few notable models created hope for open-source LLMs and shifted the evolution of research in this area… “Academia, nonprofits… pic.twitter.com/n88QLzB0fa
— Cameron R. Wolfe, Ph.D. (@cwolferesearch) July 24, 2023
Here are some links to the relevant papers:
- GPT-NeoX-20B: https://t.co/fVe5TssMXy
- OPT: https://t.co/id2eigLUzk
- BLOOM: https://t.co/UVrWcgKK1i
— Cameron R. Wolfe, Ph.D. (@cwolferesearch) July 24, 2023
For more details on the early development of open-source LLMs, check out my most recent newsletter. I am currently writing a three-part series on the history of open-source LLM research! https://t.co/iGWErsCmoD
— Cameron R. Wolfe, Ph.D. (@cwolferesearch) July 24, 2023
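The practical difference is simple: with an open-source model the weights sit on your own disk instead of behind an API. A minimal sketch of local use, assuming the Hugging Face transformers and accelerate libraries and enough memory for whichever checkpoint you pick:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Any of the models in the thread can be pulled from the Hugging Face Hub,
# e.g. "EleutherAI/gpt-neox-20b", "facebook/opt-6.7b", "bigscience/bloom-7b1".
name = "facebook/opt-6.7b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, device_map="auto", torch_dtype="auto")  # spread across available devices

prompt = "Open-source language models matter because"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))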
Friday, July 21, 2023
What kind of smart device passes AP exams but can't play a decent game of tic-tac-toe?
5/5 So even when we achieve general human-level performance ("AGI"), AIs are likely to still have skills incomparable to humans', and will continue to for a long time. They won't be "drop-in replacements" for human jobs, but labor will adapt to combine them. pic.twitter.com/sFAcj9Acy4
— Boaz Barak (@boazbaraktcs) July 20, 2023
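The title's contrast is easy to make concrete: perfect tic-tac-toe fits in a couple dozen lines of exhaustive minimax, sketched below, while no comparably short recipe makes an LLM's play reliable.

# Perfect tic-tac-toe via exhaustive minimax. The board is a 9-character
# string; X maximizes, O minimizes; 1 = X wins, -1 = O wins, 0 = draw.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (best achievable score, best move index) for `player`."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    if " " not in b:
        return 0, None
    moves = []
    for i, c in enumerate(b):
        if c == " ":
            score, _ = minimax(b[:i] + player + b[i+1:],
                               "O" if player == "X" else "X")
            moves.append((score, i))
    return (max if player == "X" else min)(moves)

# Best reply for O after X takes the centre:
print(minimax("    X    ", "O"))  # -> (0, 0): O takes a corner; draw with best play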
Thursday, July 20, 2023
Strangler Fig Tree
This is a strangler fig tree. Inside this tree is a hollow space where a different tree stood
— Science girl (@gunsnrosesgirl3) July 20, 2023
The strangler fig's seeds have made their way into the canopy of a host tree and germinated. As the fig's roots grow, they cascade down the trunk. Once they are in the ground it… pic.twitter.com/UsdUCXJGV9
Read more: https://t.co/LuY8BaIZG4 https://t.co/e7UIi7dVwF
— Science girl (@gunsnrosesgirl3) July 20, 2023
Monday, July 17, 2023
A speculative look at the current AI business ecology
6 months ago it looked like AI / LLMs were going to bring a much-needed revival to the venture startup ecosystem after a few tough years. With companies like Jasper starting to slow down, it’s looking like this may not be the case. Right now there are 2 clear winners, a…
— Sam Hogan (@0xSamHogan) July 16, 2023
Saturday, July 15, 2023
Who owns the gates to culture and knowledge?
Julia Angwin, The Gatekeepers of Knowledge Don’t Want Us to See What They Know, NYTimes, July 14, 2023. From the article:
We are living through an information revolution. The traditional gatekeepers of knowledge — librarians, journalists and government officials — have largely been replaced by technological gatekeepers — search engines, artificial intelligence chatbots and social media feeds.
Whatever their flaws, the old gatekeepers were, at least on paper, beholden to the public. The new gatekeepers are fundamentally beholden only to profit and to their shareholders.
That is about to change, thanks to a bold experiment by the European Union.
With key provisions going into effect on Aug. 25, an ambitious package of E.U. rules, the Digital Services Act and Digital Markets Act, is the most extensive effort toward checking the power of Big Tech (beyond the outright bans in places like China and India). For the first time, tech platforms will have to be responsive to the public in myriad ways, including giving users the right to appeal when their content is removed, providing a choice of algorithms and banning the microtargeting of children and of adults based upon sensitive data such as religion, ethnicity and sexual orientation. The reforms also require large tech platforms to audit their algorithms to determine how they affect democracy, human rights and the physical and mental health of minors and other users.
This will be the first time that companies will be required to identify and address the harms that their platforms enable. To hold them accountable, the law also requires large tech platforms like Facebook and Twitter to provide researchers with access to real-time data from their platforms. But there is a crucial element that has yet to be decided by the European Union: whether journalists will get access to any of that data.
Do AI companies have the right to harvest anything anyone uploads to the web and use it for training their engines? [& invention?]
Ms. Loffstadt also helped organize an act of rebellion last month against A.I. systems. Along with dozens of other fan fiction writers, she published a flood of irreverent stories online to overwhelm and confuse the data-collection services that feed writers’ work into A.I. technology.
“We each have to do whatever we can to show them the output of our creativity is not for machines to harvest as they like,” said Ms. Loffstadt, a 42-year-old voice actor from South Yorkshire in Britain.
Fan fiction writers are just one group now staging revolts against A.I. systems as a fever over the technology has gripped Silicon Valley and the world. In recent months, social media companies such as Reddit and Twitter, news organizations including The New York Times and NBC News, authors such as Paul Tremblay and the actress Sarah Silverman have all taken a position against A.I. sucking up their data without permission.
Their protests have taken different forms. Writers and artists are locking their files to protect their work or are boycotting certain websites that publish A.I.-generated content, while companies like Reddit want to charge for access to their data. At least 10 lawsuits have been filed this year against A.I. companies, accusing them of training their systems on artists’ creative work without consent. This past week, Ms. Silverman and the authors Christopher Golden and Richard Kadrey sued OpenAI, the maker of ChatGPT, and others over A.I.’s use of their work. [...]
“The data rebellion that we’re seeing across the country is society’s way of pushing back against this idea that Big Tech is simply entitled to take any and all information from any source whatsoever, and make it their own,” said Ryan Clarkson, the founder of Clarkson.
Eric Goldman, a professor at Santa Clara University School of Law, said the lawsuit’s arguments were expansive and unlikely to be accepted by the court. But the wave of litigation is just beginning, he said, with a “second and third wave” coming that would define A.I.’s future.
What about A.I. invention? See Steve Lohr, Can A.I. Invent? NYTimes, July 15, 2023.
Friday, July 14, 2023
Breathing in waves: Understanding respiratory-brain coupling as a gradient of predictive oscillations
Respiration and brain processing: https://t.co/WREIn4Im41 pic.twitter.com/IZCL1NTES6
— Luiz Pessoa (@PessoaBrain) July 14, 2023
Note that breathing involves both top-down voluntary control and bottom-up involuntary control.
From the linked article:
Highlights
- Respiration fundamentally influences neural oscillations in animals and humans.
- Neuropsychiatric disorders are characterised by specific oscillatory profiles.
- Here, respiratory and neural aberrations are integrated to explain psychopathology.
- We propose a gradient model of respiratory-modulated prediction errors.
Abstract
Breathing plays a crucial role in shaping perceptual and cognitive processes by regulating the strength and synchronisation of neural oscillations. Numerous studies have demonstrated that respiratory rhythms govern a wide range of behavioural effects across cognitive, affective, and perceptual domains. Additionally, respiratory-modulated brain oscillations have been observed in various mammalian models and across diverse frequency spectra. However, a comprehensive framework to elucidate these disparate phenomena remains elusive. In this review, we synthesise existing findings to propose a neural gradient of respiratory-modulated brain oscillations and examine recent computational models of neural oscillations to map this gradient onto a hierarchical cascade of precision-weighted prediction errors. By deciphering the computational mechanisms underlying respiratory control of these processes, we can potentially uncover new pathways for understanding the link between respiratory-brain coupling and psychiatric disorders.
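For readers who want the formal object behind “precision-weighted prediction errors,” here is the standard hierarchical predictive-coding formulation in textbook form; this is a generic sketch, not the specific model the review proposes. At level $l$, the error is the difference between that level's state $\mu_l$ and the prediction $g_l(\mu_{l+1})$ descending from the level above, and the precision matrix $\Pi_l$ scales how strongly that error drives inference:

\varepsilon_l = \mu_l - g_l(\mu_{l+1}), \qquad
\xi_l = \Pi_l \, \varepsilon_l, \qquad
F = \tfrac{1}{2} \sum_l \varepsilon_l^{\top} \Pi_l \, \varepsilon_l + \mathrm{const.}

One way to read the authors' gradient proposal is that a slow bodily rhythm like respiration periodically rescales the precisions $\Pi_l$, so the same sensory error counts for more or less depending on the respiratory phase.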
Sunday, July 9, 2023
The neural basis of color perception
As an aspiring brain scientist, my project was to discover the brain implementation of Opponent Colors Theory, the accepted explanation of color appearance. It took 20 yrs of trying (& failing) to find the corresponding neurophysiology, to realize the theory is wrong #color 1/8 https://t.co/vYPvM7y2Cy
— Bevil Conway (@BevilConway) July 7, 2023
Friday, July 7, 2023
Rethinking Covid: A COVID Dissenter Speaks Out
Glenn Loury & Jay Bhattacharya | The Glenn Show
0:00 The problem with scientific consensus
6:36 Why Jay and his colleagues were branded “fringe epidemiologists”
15:52 Jay: We need to engage with everyone—even those with mistaken beliefs
25:55 Persuading science skeptics
36:04 How do we stop COVID overreach from happening again?
46:38 Jay: Gain-of-function research is impossible to do safely
55:03 Are some ideas too dangerous to test?
59:30 Jay: Fauci’s blunder was so catastrophic that only history can judge him
Glenn Loury and Jay Bhattacharya (Stanford, The Illusion of Consensus). Recorded June 23, 2023.
Monday, July 3, 2023
Insect consciousness
In “What insects can tell us about the origins of consciousness”, the authors argued that even insects (like the fruit fly) have a capacity for subjective experience, because their brains can create a neural simulation of the world. https://t.co/Hw1ebrHDmG https://t.co/PQ77cSHNPA
— hardmaru (@hardmaru) July 3, 2023
Sunday, July 2, 2023
Fruit-fly connectome
Not only have they mapped out the fruit fly brain, but they actually booted it up in a computer and made it “eat” and “groom” 🤯 https://t.co/FCDUVBn8Qi pic.twitter.com/j6Jrw5ETxR
— Amjad Masad (@amasad) July 2, 2023
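The general recipe behind such simulations, as a toy sketch rather than the researchers' actual pipeline: treat the mapped wiring as a signed synaptic weight matrix and run leaky integrate-and-fire dynamics over it. In the real work the matrix comes from the measured connectome and stimulating sugar-sensing neurons elicits the feeding behavior; here the matrix is random and every constant is illustrative.

import numpy as np

rng = np.random.default_rng(0)
N = 1000
# Sparse signed weights standing in for the mapped connectome.
W = rng.normal(0, 0.5, (N, N)) * (rng.random((N, N)) < 0.02)

v = np.zeros(N)                     # membrane potentials
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
dt = 1.0                            # ms per step
drive = np.zeros(N)
drive[:50] = 0.06                   # constant input to a seed group
                                    # (the analogue of "sugar neurons")

for t in range(500):
    spikes = v >= v_thresh          # which neurons fire this step
    v[spikes] = v_reset             # reset the ones that fired
    syn = W @ spikes                # propagate spikes through the wiring
    v += dt * (-v / tau) + syn + drive
    if t % 100 == 0:
        print(f"t={t} ms, spikes this step: {int(spikes.sum())}")

The striking result the tweet celebrates is that activity propagated through nothing more than the measured wiring, with simple dynamics like these, is enough to reproduce recognizable behaviors.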