Wednesday, November 24, 2021
Flowers and dogs [Japan]
Screen of wisteria and morning glory with dogs
Takeuchi Seihō, 1898 pic.twitter.com/70uaBHAXLl
— Caustic Cover Critic (@Unwise_Trousers) November 24, 2021
Tuesday, November 16, 2021
Neural architecture search (NAS): an ML technique for automating the creation of neural networks
1) Reinforcement Learning: these models leverage reinforcement learning action-reward duality
2) Evolutionary Algorithms: population-based global optimizers for black-box functions
3) One-Shot Models: trains a single neural network during the search process
2/2 pic.twitter.com/yKPwUEgBqW
— TheSequence (@TheSequenceAI) November 16, 2021
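To make the second approach a bit more concrete, here is a minimal sketch of an evolutionary NAS loop. Everything in it is illustrative rather than taken from the tweet: the (operation, width) encoding of an architecture, the operation list, and the evaluate() function, which is a synthetic stand-in for the validation accuracy a real search would obtain by training each candidate.

```python
# Toy illustration of evolutionary-algorithm NAS (approach 2 above).
# The "architecture" is just a list of (operation, width) genes, and
# evaluate() is a synthetic proxy for validation accuracy.
import random

OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]
WIDTHS = [16, 32, 64, 128]

def random_architecture(depth=4):
    """Sample a random candidate: one (op, width) gene per layer."""
    return [(random.choice(OPS), random.choice(WIDTHS)) for _ in range(depth)]

def mutate(arch):
    """Copy the parent and re-sample one gene."""
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = (random.choice(OPS), random.choice(WIDTHS))
    return child

def evaluate(arch):
    """Synthetic fitness standing in for validation accuracy:
    rewards convolutions and widths near 64."""
    score = 0.0
    for op, width in arch:
        score += {"conv3x3": 1.0, "conv5x5": 0.8, "maxpool": 0.2, "identity": 0.1}[op]
        score -= abs(width - 64) / 256
    return score

def evolve(pop_size=20, generations=30):
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: pop_size // 4]            # keep the top quarter
        children = [mutate(random.choice(parents))   # refill by mutation
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate)

if __name__ == "__main__":
    best = evolve()
    print("best architecture:", best, "fitness:", round(evaluate(best), 3))
```

Real systems differ mainly in the encoding and in how cheaply candidates are scored (weight sharing, proxies, early stopping), but the select, mutate, refill loop is the same.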
Wednesday, November 10, 2021
Understanding the role of individual units in a deep neural network
Understanding the role of individual units in a deep neural network https://t.co/97JH4ga3zy #NeuralNetwork
— Data science (@Datascience__) November 10, 2021
Abstract from the linked article:
Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
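As a rough illustration of the dissection idea, and not the authors' code, the sketch below scores how strongly one unit "matches" a visual concept: binarize the unit's activation map at a high quantile and measure its intersection-over-union with a segmentation mask for that concept. The activation tensor and the mask here are random stand-ins for what the paper extracts from a trained network and a labeled dataset.

```python
# Sketch of the scoring idea behind network dissection: a unit is matched to
# a visual concept when its thresholded activation map overlaps strongly
# (high IoU) with a segmentation mask for that concept.
# Activations and mask are random stand-ins for real feature maps and labels.
import numpy as np

rng = np.random.default_rng(0)

units, H, W = 8, 56, 56
activations = rng.random((units, H, W))    # stand-in for one layer's feature maps
concept_mask = rng.random((H, W)) > 0.7    # stand-in for a concept segmentation (e.g. "tree")

def unit_concept_iou(act_map, mask, quantile=0.99):
    """Binarize a unit's activation map at a high per-unit quantile and
    return its intersection-over-union with the concept mask."""
    thresh = np.quantile(act_map, quantile)
    unit_region = act_map > thresh
    intersection = np.logical_and(unit_region, mask).sum()
    union = np.logical_or(unit_region, mask).sum()
    return intersection / union if union else 0.0

scores = [unit_concept_iou(activations[u], concept_mask) for u in range(units)]
best = int(np.argmax(scores))
print(f"unit {best} best matches the concept (IoU={scores[best]:.3f})")
```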
Sunday, November 7, 2021
Thursday, November 4, 2021
On the problem of adversarial attacks in machine learning
A new paradigm is needed. But the moat around the current paradigm is so deep that I don't see how any scientist can carve out the time and space to pursue a different approach without committing career suicide.
— David Pfau (@pfau) November 4, 2021
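For context on what the thread is reacting to, the snippet below is the textbook fast gradient sign method (FGSM) on a toy logistic-regression classifier: perturb each input feature by a small epsilon in the direction that increases the loss, and a confidently correct prediction flips. The weights, input, and epsilon are invented for the illustration and are not connected to the tweet.

```python
# Minimal illustration of an adversarial attack: FGSM on a toy
# logistic-regression classifier. Real attacks apply the same idea to deep
# networks via automatic differentiation; all numbers here are made up.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": fixed logistic-regression weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.0

x = np.array([0.5, -0.5, 1.0])   # clean input the model assigns to class 1
y = 1.0                          # its true label
eps = 0.6                        # attack budget (max per-feature change)

p_clean = sigmoid(w @ x + b)

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w, so FGSM
# moves each feature by eps in the direction that increases the loss.
grad_x = (p_clean - y) * w
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"clean input:       p(class 1) = {p_clean:.3f}")
print(f"adversarial input: p(class 1) = {p_adv:.3f}")  # small perturbation, flipped decision
```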
Wednesday, November 3, 2021
Facebook's Metaverse, Ho Hum, WTF!
From Ethan Zuckerman, "Hey, Facebook, I Made a Metaverse 27 Years Ago," The Atlantic, October 29, 2021:
The metaverse Zuckerberg shows off in his video doesn’t have to solve those problems. He’s promising future technologies that are five to 10 years off. But it still looks like junk. The fire in his fireplace is a roughly rendered glow. His superhero secret lair looks out over a paradise island that’s almost entirely static. There’s the nominal motion of waves, but none of the foliage moves. It’s tropical wallpaper pasted to virtual windows. The sun is setting behind Zuckerberg’s left shoulder, but he’s being lit from the right front. Even with a bajillion dollars to invest in a video to relaunch and rename his company, Zuckerberg’s team is showing just how difficult it is to create a visually believable virtual world.
But that’s not the problem with Zuckerberg’s metaverse. The problem is that it’s boring. The futures it imagines have been imagined a thousand times before, and usually better. Two old men chat over a chessboard, one in Barcelona, one in New York, much as they did on Minitel in the 1980s. There’s virtual Ping-Pong and surfing, you know, like on a Wii. You can watch David Attenborough nature documentaries, like you do on Netflix. You can videoconference with your workmates … you know, like you do every single day.
Zuckerberg isn’t building the metaverse because he has a remarkable new vision of how things could be. There’s not an original thought in his video, including the business model. Thirty-eight minutes in, Zuckerberg gets serious, talking about how humbling the past few years have been for him and his business. Remember, he’s not humbled by the problem of Russian disinformation, or the spread of anti-vax misinformation, or the challenge of how Instagram affects teen body image. No, he’s humbled by how hard it is to fight against Apple and Google. [...]
Facebook can claim originality in at least one thing. Its combination of scale and irresponsibility has unleashed a set of diverse and fascinating sociopolitical challenges that it will take lawmakers, scholars, and activists at least a generation to fix. If Facebook has learned anything from 17 years of avoiding mediating those conflicts, it’s not apparent from the vision for the metaverse, where the power of human connection is celebrated as uncritically as it was before Macedonian fake-news brokers worked to sway the 2016 election.
How will a company that can block only 6 percent of Arabic-language hate content deal with dangerous speech when it’s worn on an avatar’s T-shirt or revealed at the end of a virtual fireworks display? Add monetization to the problems of content moderation—who gets to make money off a digital hate-speech T-shirt?—and Facebook’s oversight board is going to be very, very busy. [...]
The metaverse isn’t about building perfect virtual escape hatches—it’s about holding a mirror to our own broken, shared world. Facebook’s promised metaverse is about distracting us from the world it’s helped break.
Tuesday, November 2, 2021
Microsoft's Universal Image Language Representation model
Turing Bletchley can perform image-language tasks in 94 languages using a common image-language vector space, without any metadata or surrounding text. This Universal Image Language Representation model represents a breakthrough in deep learning. https://t.co/Jy1EXPcEdI pic.twitter.com/YTqnlS93ne
— Microsoft Research (@MSFTResearch) November 1, 2021
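The claim worth unpacking is the "common image-language vector space": once images and captions in any of the 94 languages are embedded into the same space, matching is just nearest-neighbour search by cosine similarity. The sketch below shows only that retrieval arithmetic; the embeddings are random stand-ins, since Turing Bletchley's encoders are not actually called here.

```python
# Retrieval mechanics implied by a shared image-language vector space:
# captions in any language and images live in one space, so matching is
# nearest-neighbour search by cosine similarity, with no metadata needed.
# The embeddings below are random stand-ins for real encoder outputs.
import numpy as np

rng = np.random.default_rng(42)
dim = 512

captions = ["a dog in the snow", "un chien dans la neige", "雪の中の犬"]
text_embeddings = rng.normal(size=(len(captions), dim))   # stand-in for the text encoder
image_embeddings = rng.normal(size=(5, dim))              # stand-in for 5 encoded images

def normalize(v):
    """Project vectors onto the unit sphere so dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

text_embeddings = normalize(text_embeddings)
image_embeddings = normalize(image_embeddings)

# One similarity matrix covers every (caption, image) pair, whatever the language.
similarity = text_embeddings @ image_embeddings.T

for caption, row in zip(captions, similarity):
    print(f"{caption!r} -> best image index {int(np.argmax(row))}")
```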