Saturday, September 19, 2020

Facebook or freedom, Part 3: The game goes on [Media Notes, special edition #1]

It’s been 12 days since my last Facebook post, on September 7, and I’ve had at least four 48-hour reprieves since then, maybe five – I’ve not been keeping a strict count. What do I mean by a reprieve? I switch to a new page on FB and Wham! they hit me with the new interface without warning. But so far they’ve also given me the option of switching back, which I’ve done. When I do so, however, they always ask why I’m doing it: is it because you don’t like something or want something else? No, I reply, it’s because I don’t like you arbitrarily messing with my mind.

I assume that one of these days I’m not going to get that option. When, though, and why do they keep giving me 48 hours and then not following through? Do they want to keep on collecting my comment – I’ve only got one – on why I’m going back? There’s got to be some logic here.

I’ve been watching this new docudrama on Netflix, The Social Dilemma, which I don’t much like (more on that later). It’s about social media, including Facebook of course, and how the tech wizards behind the screen do everything to control us, including running little experiments where they make some change in what we see and note how we respond. That makes sense. From which it follows that my behavior is being closely monitored, not necessarily by a person – I can’t imagine that I’m important enough – but more likely by some program module.

Is that module primed for each individual user? That is, FB has been tracking our behavior on FB and across the web, right? So it’s built up some statistical profile of each person’s activity. I’d assume it’s making guesses about how we’re going to behave in various circumstances. We’ll either confirm the guess, thus confirming its (Bayesian) priors, or not, in which case it updates those priors. Either way the algorithm ‘learns’ something.
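I have no idea what Facebook actually runs, but the kind of learning I’m imagining above can be sketched with a toy Beta-Bernoulli model – everything here (the function, the numbers, the question being modeled) is hypothetical, just an illustration of confirming or updating a prior:

```python
# Toy sketch (purely hypothetical): model "will this user switch back to
# the old interface?" as a coin with unknown bias, with a Beta prior over
# that bias. Each observation nudges the prior; that's the "learning."

def update_beta(alpha: float, beta: float, switched_back: bool):
    """Conjugate Beta-Bernoulli update: add one success or one failure."""
    return (alpha + 1, beta) if switched_back else (alpha, beta + 1)

# Start roughly uninformed: Beta(1, 1) is uniform over [0, 1].
alpha, beta = 1.0, 1.0

# Four observed reprieves so far, and each time the user switched back.
for _ in range(4):
    alpha, beta = update_beta(alpha, beta, switched_back=True)

# Posterior mean alpha / (alpha + beta) is the module's current guess
# about the odds this user will switch back again next time.
print(alpha / (alpha + beta))  # 5/6, about 0.83
```

Whether the guess is confirmed or not, the parameters move, so either outcome teaches the module something – which is all I meant by the algorithm ‘learning.’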

So, what kinds of guesses is FB making about my behavior with respect to this interface change? When people make a comment about why they want to keep the old interface, how do they deal with those comments? Are they looking for something in particular? I’d assume that very few – none? – get read by a human. What’s their AI-module checking for?

What’s their plan for me the next time they switch me over? Are they going to give me the option of switching back? If so, what’s their prediction about whether or not I’ll accept the offer? If they predict that, yes, I’ll switch back, do they already know how many more chances they’ll give me to stay with the old interface?

I don’t know. Just how sophisticated is their manipulation software? I don’t know that either. In a sense, neither do they.

* * * * *

And that brings me to The Social Dilemma. I’m about two-thirds of the way through, and it took me two, maybe three, sittings to get that far. Will I finish it? I don’t know, but maybe Facebook, Google, and of course Netflix know, or at least have their guesses. Maybe they’re even betting on my behavior. Now there’s a concept: have the different social networks bet on user behavior. How would the bets be paid off? Shares of stock? User information?

The Social Dilemma switches back and forth between scenes where various experts from industry and academia tell us how social media – Facebook, Twitter, Google, Instagram, etc. – tracks and manipulates our behavior, and dramatized scenes where we see people respond to the manipulations of a nefarious AI, personified by Vincent Kartheiser. I don’t feel that I’m learning anything new. Yes, yes, they’re doing all this stuff and yes, yes, they’re really good at it. But there’s no real detail, no substantial argumentation, just all this assertion sandwiched in between this cheesy dramatization, which is useless.

Coming into this program I KNEW there was a problem – tracking, manipulation, fragmentation and polarization in the civic sphere, and so forth. I was looking for something to give form and substance to all these things I already know in a loose and baggy way. The Social Dilemma isn’t doing that.

Three useful reviews:
The Oremus piece reminded me of Marcuse's notion of repressive desublimation, where an oppressive regime allows the expression of some dissent (desublimation) but not so much as to threaten the regime in any material way (repressive). It's a kind of tax the oppressors levy on the oppressed for the privilege of speaking truth to power, but nothing more than speech.

For example, and from Oremus' article:
But if The Social Dilemma largely succeeds in answering its opening question (“What’s the problem?”), there’s a second, crucial, stage-setting scene that the film seems to forget about as it goes on. It’s when the film’s central real-life figure, the “humane tech” advocate Tristan Harris, recounts how he grew disillusioned with his work as a young designer at Google, and eventually wrote an explosive internal presentation that rocked the company to its core — or so it seemed. The presentation argued that Google’s products had grown addictive and harmful, and that the company had a moral responsibility to address that. It was passed around frantically within the company’s ranks, quickly reached the CEO’s desk, and Harris’ colleagues inundated him with enthusiastic agreement. “And then… nothing,” Harris recalls. Having acknowledged ruefully that their work was hurting people, and promising to do better, Googlers simply went back to their work as cogs in the profit machine. What Harris thought had served as a “call to arms” was really just a call to likes and faves — workplace slacktivism.
But also, as an example of the film's narrow tech-bro-centric POV:
As the activist Evan Greer of Fight for the Future points out, the film almost entirely ignores social media’s power to connect marginalized young people, or to build social movements such as Black Lives Matter that challenge the status quo. This omission is important, not because it leaves the documentary “one-sided,” as some have complained, but because understanding social media’s upsides is critical to the project of addressing its failures. (I wrote in an earlier newsletter that the BLM protests should remind us why social media is worth fixing.)
* * * * *

Meanwhile, why not let people have the interface they want? In fact, why not offer several interfaces? After all, Facebook is a huge company; they could afford to do this, no?

All Facebook needs is to be able to pass ads to users, make suggestions, and track usage. Does everyone have to have the same interface to make this happen?

But then there’s that general principle: We’ve got you under our thumb. That makes sense. I don’t like it at all – which is why I’m putting up (ultimately futile) resistance – but I understand it.

Somewhere out on the web there’s got to be images of Zuckerberg tricked out in Borg cyber gear.
