It makes sense that LinkedIn would be the first major social network to push AI-generated content on its users. The Microsoft-owned company is weird. It’s corporate. It’s full of workfluencer posts and engagement bait that ranges in tone from management consultant bland to cheerfully psychotic. Happily, this is the same emotional spectrum on which AI tends to operate.
LinkedIn isn’t populating its feed with AI chatbots just yet, but last week it began sharing “AI-powered conversation starters” with the express purpose of provoking discussion among users. These posts are “developed” with the help of LinkedIn’s editorial team and matched with human experts, who can then offer their thoughts on topics like “how to create a consistent brand voice on social media” and “how to monitor the online reach of your writing.” So far, so anodyne — like the contents of an r/askmckinsey subreddit.
The semiautomated social network is an engagement machine
But this project is a milestone nevertheless, and it may herald the start of a wider revolution for the web. It’s the first time, as far as I know, that a major social media platform has directly served users AI-generated content to keep them engaged. And in a time of social media stagnation, from Twitter’s manifold struggles to Meta’s desperate-looking pitch for paid subscriptions, it could point to the industry’s future: the semiautomated social network.
It’s true, of course, that social media has been steering users’ attention using AI ever since the invention of the algorithmic feed. As soon as Facebook, Instagram, Twitter, et al. began ranking users’ content using opaque metrics, they became human-machine hybrids, shaping our actions to keep us stimulated and engaged. But there is a difference between this sort of intervention and directly sharing AI-generated content, not least because companies now have the opportunity to flood the zone with this stuff in a way that simply wasn’t possible even a few years ago. “Generative AI” may be the most overhyped trend of 2023, but the hype is not without reason. We now have AI tools that can generate endless spools of imagery, video, music, and text, while the social media sites have all the user data they need to train these systems. Why not plug one into the other?
It’s not hard to imagine how a semiautomated social network might operate. Beyond serving users AI-generated content, you might create fake users in the form of AI chatbots to needle, encourage, and coddle your user base. Maybe, to begin with, you deploy bots only to contain problematic users: an idea known as heavenbanning, in which trolls can interact only with chatbots that mollify them by agreeing with everything they say. (The concept was coined by Twitter user @nearcyan.) But then, maybe, when your monthly user numbers start dropping and the quarterly earnings aren’t looking so good either, you decide to let more bots loose on the general populace. “It’s a proven way to increase positive interactions among users!” you write in your press release. “We’re giving people what they want: quality personalized content at scale. Never be bored in our AI playpen.”
And hey, it might be popular, too. There’s no reason to think people wouldn’t enjoy a social network populated by bots. (They enjoy Twitter, after all.) Many of us already treat social media as a game: forming alliances, brigading enemies, and racking up points in our metric of choice. It might be reassuring to know that the bot-backed pile-on you’ve initiated is only targeting another computer program, whose live-streamed breakdown is, you assure yourself, purely AI-generated. And why bother cultivating human friendships in online spaces when the chatbot equivalent offers more leniency and less friction? If digital relationships are equivalent to IRL ones, does it matter if your friends are bots? And look, if I sign up for the BotFriend+ package, I even get random Amazon gifts in the mail!
This is one possible future, anyway; more likely, any automation will be subtler than this. As these changes take place, though, it will be the end of social media as it was originally conceived — as a place to share news and thoughts with real people — and the start of a new form of online entertainment.
Arguably, this transition is already happening. One of the most popular uses of consumer AI is creating chatbots based on fictional characters on platforms like Character.AI and NovelAI. Users hone AI versions of favorite superheroes or video game characters and then just… chat with them, for hours at a time. It’s another form of fandom. The ability of these systems to keep users engaged is unarguable, too. Just look at what happened when Microsoft released its Bing chatbot. The bot lied to people, insulted them, manipulated them, and they loved it. Or there was the case of the virtual companion chatbot Replika. When the company behind the bot removed its ability to engage in romantic roleplay — a feature it had advertised as a replacement for human relationships — moderators on the app’s subreddit had to pin links to mental health resources to help users in distress. For a more ruthless company, that sort of engagement would be an opportunity.
The current giants of the online world have noticed this shift already. Just last month, Snap launched its My AI chatbot powered by ChatGPT, and yesterday, Discord said it would use ChatGPT to improve the conversational abilities of its Clyde chatbot. Meta, too, seems to be developing similar features, with Mark Zuckerberg promising in February that the company is exploring the creation of “AI personas that can help people in a variety of ways.” What that means isn’t clear, but Facebook already runs a simulated version of its site populated by AI users in order to model and predict the behavior of their human counterparts.
But introducing chatbots to these platforms may be their death knell, too. I recently read a blog post by musician and writer Damon Krukowski in which he compared Spotify’s AI DJ feature to the rise of digital projectors in cinemas: a tool that was supposed to automate human labor but instead led to a drop in screen quality and, suggests Krukowski, a decline in cinema attendance. “Eliminate the labor,” he writes, “and you will eventually eliminate the spaces for which those jobs were created.” Maybe if we eliminate the labor involved in social networks — which is the job of posting, always posting — our role in these platforms will disappear as well. Let the AIs argue, then. I’ll announce my retirement on LinkedIn.