New York Times: Tiffany Hsu

In the weeks since Elon Musk took over Twitter, dozens of people responsible for keeping dangerous or inaccurate material in check on the service have posted on LinkedIn that they resigned or lost their jobs. Their statements have drawn a flood of condolences — and attempts to recruit them. Overtures arrived from rival tech services, retailers, consulting firms, government contractors and other organizations that want to use the former Twitter employees — and those recently let go by Meta and the payments platform Stripe — to track and combat false and toxic information on the internet.

Ania Smith, the chief executive of TaskRabbit, the Ikea-owned marketplace for gig workers, commented on a former Twitter employee’s post this month that he should consider applying for a product director role, working in part on trust and safety tools.

“The war for talent has really been exceptional in the last 24 months in tech,” Ms. Smith said in an interview. “So when we see layoffs happening, whether it’s at Twitter or Meta or other companies, it’s definitely an opportunity to go after some of the very high-caliber talent we know they hire.”

She added that making users feel safe on the TaskRabbit platform was a key component of her company’s success.

“We can’t really continue growing without investing in a trust and safety team,” she said.

The threats posed by conspiracy theories, misleadingly manipulated media, hate speech, child abuse, fraud and other online harms have been studied for years by academic researchers, think tanks and government analysts. But increasingly, companies in and outside the tech industry see that abuse as a potentially expensive liability, especially as more work is conducted online and regulators and clients push for stronger guardrails.

On LinkedIn, under posts eulogizing Twitter’s work on elections and content moderation, comments promoted openings at TikTok (threat researcher), DoorDash (community policy manager) and Twitch (trust and safety incident manager). Managers at other companies solicited suggestions for names to add to recruiting databases. Google, Reddit, Microsoft, Discord and ActiveFence — a four-year-old company that said last year that it had raised $100 million and that it could scan more than three million sources of malicious chatter in every language — also have job postings.

The trust and safety field barely existed a decade ago, and the talent pool is still small, said Lisa Kaplan, the founder of Alethea, a company that uses early-detection technology to help clients protect against disinformation campaigns. The three-year-old company has 35 employees; Ms. Kaplan said she hoped to add 23 more by mid-2023 and was trying to recruit former Twitter employees.

Disinformation, she said, is like “the new malware” — a “digital reality that is ultimately going to impact every company.” Clients that once employed armed guards to stand outside data rooms, and then built online firewalls to block hackers, are now calling firms like Alethea for backup when, for example, coordinated influence campaigns target public perception of their brand and threaten their stock price, Ms. Kaplan said.

“Anyone can do this — it’s fast, cheap and easy,” she said. “As more actors get into the practice of weaponizing information, either for financial, reputational, political or ideological gain, you’re going to see more targets. This market is emerging because the threat has risen and the consequences have become more real.”