Money is just the ticket, not the moat.

Meta CEO Mark Zuckerberg has invested heavily in building an "AI super team" — reportedly offering nine-figure salary packages to attract top AI researchers.

However, within just two months, this dream team started facing a "talent exodus."

At least three newly recruited AI researchers resigned within a month of joining, with two returning to their previous employer, OpenAI. A senior product manager who had worked at Meta for nearly nine years also made the move to OpenAI.

By one count, at least eight people left within two months of Superintelligence Labs' founding, including researchers, engineers, and product managers. Highly paid star hires walking out almost immediately left Meta's AI effort in an awkward position barely 90 days into its heavy investment.

Why couldn't a nine-figure package keep AI talent? A large body of management research and industry data provides the answer: salary opens the door, but it is not the moat that retains talent.

MIT Sloan research analyzing 34 million online employee profiles and roughly 1.4 million Glassdoor reviews found that a "toxic culture" is more than ten times as predictive of attrition as compensation. Pay ranks mid-pack among the contributing factors; it is not the main driver.

In other words, poor culture and management push talent out more forcefully than money can pull it back.

Meta's approach to attracting AI talent was described by OpenAI CEO Sam Altman as "profit-driven" and lacking a positive team culture. Even with sky-high salaries on offer, AI experts who find the work environment disappointing will leave anyway.

Furthermore, the connection between salary and job satisfaction is quite limited. Classic meta-analyses put the correlation between pay and employee satisfaction at only about 0.15, meaning pay level explains roughly 2% of the variance in how satisfied people are with their jobs.

A salary increase does not linearly lead to higher loyalty or happiness.

Many companies have found that once basic material needs are met, talent places more importance on growth opportunities, recognition, and the meaningfulness of their work. A survey of developers found that "compensation" was only the fifth most important factor in keeping engineers on board.

The top factors included good culture and vision, career growth opportunities, communication climate, and work flexibility.

Job "embeddedness" theory, meanwhile, emphasizes that retention depends on an employee's multiple ties to the organization, not just monetary rewards.

An employee's embeddedness comprises three components: close connections with colleagues and teams (links), alignment between the individual and their work and organization (fit), and what they would forfeit by leaving (sacrifice).

Highly embedded employees are less easily swayed by a high salary elsewhere, because staying yields many intangible benefits. Conversely, for an employee with weak interpersonal ties and a poor fit with their responsibilities, even a high salary is just an isolated number.

When an employee's closest colleague leaves, their likelihood of leaving increases significantly, creating a "turnover contagion" effect. This explains the chain reaction in Meta's super team, where one resignation leads to a succession of others — when the ties between the individual and the organization are weak, turnover spreads like an infectious disease.
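
To make the mechanism concrete, here is a deliberately toy Python simulation; every parameter and the scoring rule are invented for illustration, not drawn from any of the studies above. Each employee carries an embeddedness score (a stand-in for fit, links, and sacrifice combined), and each departure adds contagion pressure that embeddedness dampens.

```python
import random

# Toy model of "turnover contagion" (illustrative numbers only):
# each employee has an embeddedness score in [0, 1]; every departure
# adds contagion pressure, which embeddedness dampens.

def simulate_cascade(embeddedness, base_leave_p=0.05, contagion=1.0,
                     rounds=6, seed=42):
    """Return the total number of departures after `rounds` of contagion."""
    rng = random.Random(seed)
    staying = list(embeddedness)   # scores of employees still on the team
    departures = 0
    pressure = 0.0                 # accumulated contagion pressure
    for _ in range(rounds):
        leavers = []
        for i, e in enumerate(staying):
            # Leave probability: baseline plus pressure, damped by embeddedness.
            p = min(1.0, base_leave_p + pressure * (1.0 - e))
            if rng.random() < p:
                leavers.append(i)
        departures += len(leavers)
        # Each wave of departures raises the pressure on those who remain.
        pressure += contagion * len(leavers) / max(len(staying), 1)
        staying = [e for i, e in enumerate(staying) if i not in leavers]
    return departures

weak_ties   = [0.2] * 20   # money-only ties: low fit/links/sacrifice
strong_ties = [0.8] * 20   # high embeddedness: mission, rituals, governance

print("weak ties, departures:  ", simulate_cascade(weak_ties))
print("strong ties, departures:", simulate_cascade(strong_ties))
```

In expectation, the weakly embedded team loses more people per round as the pressure compounds; that compounding is exactly the chain reaction the theory predicts.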

Applied to an AI lab, the retention stack looks like this: mission and governance alignment → R&D-rhythm commitments (compute/publications/milestones) → organizational stability (few reorganizations, clear responsibilities) → work arrangements and geographic flexibility → compensation structure. Nine figures can "open the door," but staying depends on the first four links.

Instead of Meta's strategy of simply throwing money at the problem, consider Anthropic, a case study in how culture can bind core researchers.

The Information's feature describes Anthropic employees' commitment as almost "cultish," and that cultural binding operates in three ways.

First, value screening. In interviews, candidates are not waved through just because they are top-tier talent. They face unconventional questions such as "Who is someone you respect but disagree with on values?" and "Tell us about a time you sacrificed profit for ethics." The pass rate for these behavioral interviews is reportedly very low.

In addition to technical assessments, ethical dilemmas and value conflicts are given significant weight in behavioral interviews; background checks focus not only on skills but also on character.

Thus, from the start, they select partners whose mission and values align, eliminating those who are likely to clash and leave later.

The second step is governance commitment: "safety and public interest" is engraved into the company's bylaws and power structure. Anthropic combines a public-benefit-corporation charter that gives the board legal room to weigh public benefit against profit (PBC), a Long-Term Benefit Trust that balances board power over the long run (LTBT), and hard failsafes triggered at the technical level (the Responsible Scaling Policy, RSP) into a triangle, ensuring that when models become more dangerous, the company has the duty, the power, and the mechanism to hit the brakes rather than being swept along by short-term commercial pressure.

For example, after Claude Opus 4 was assessed in May 2025 as possibly approaching sensitive capability thresholds, Anthropic invoked the RSP and activated ASL-3 safeguards before proceeding.
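
A capability-triggered failsafe of this kind is easy to sketch. The gate below is a hypothetical Python toy: the names, scores, and thresholds are invented for illustration and bear no relation to Anthropic's actual evaluation pipeline; it only shows the shape of a "hard brake" that deployment cannot route around.

```python
from dataclasses import dataclass

# Hypothetical sketch of an RSP-style release gate. All names and
# thresholds are invented; this shows the shape of the mechanism,
# not Anthropic's real pipeline.

@dataclass
class EvalResult:
    capability_score: float  # aggregate score from dangerous-capability evals
    asl3_threshold: float    # level at which reinforced safeguards are required

def release_gate(result: EvalResult, asl3_safeguards_active: bool) -> str:
    """Every deployment passes through this single check."""
    if result.capability_score < result.asl3_threshold:
        return "deploy under standard safeguards"
    if asl3_safeguards_active:
        return "deploy under reinforced ASL-3 safeguards"
    return "HALT: activate ASL-3 safeguards before continuing"

# A model scoring above threshold without safeguards in place is halted.
print(release_gate(EvalResult(capability_score=0.9, asl3_threshold=0.7),
                   asl3_safeguards_active=False))
```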

The third step is the most "cultish," as WIRED reports: DVQ (Dario Vision Quest), a fixed monthly company event described as a visionary speech with religious undertones. DVQ normalizes the question of "why we fight" and forms a shared language within the team, with management openly discussing AI risks and societal impact to align everyone's "coordinate system of meaning."

In effect, the company gathers regularly, much like a weekly assembly or ethics class.

These three steps map directly onto the three ties from organizational behavior: fit (values/mission) → links (rituals/networks) → sacrifice (the cost of walking away from governance commitments).

Compared with a one-off raise, these ties are harder for outsiders to copy and better at preventing early turnover within the first 90 days. The retention numbers suggest the approach works: according to SignalFire's 2025 Talent Report, based on LinkedIn data (methodology not independently audited), Anthropic's two-year retention rate is about 80%, versus roughly 78% at DeepMind, 67% at OpenAI, and 64% at Meta (a directional comparison only).

Apple's AI efforts have been a mess, most visibly in its losses in the AI talent war, and this is certainly not a money problem (Apple is hardly short of cash)... Ultimately it comes down to the "culture, governance, R&D rhythm" chain. Let's break it down into four steps.

First, the misalignment between research orientation and product orientation. According to reports, the research team wanted to publish or open-source certain foundation models for ecosystem collaboration and academic reputation, but senior Apple executives worried about the brand risk of performance degradation once models were compressed for on-device use, and preferred not to disclose them.

The "papers, peer recognition, influence" loop that researchers care deeply about was directly cut off, undermining their intrinsic motivation and sowing the seeds of conflict.

Next, in June 2025 the next-generation Siri, originally slated for release, was pushed to 2026. The "time constant" between research and product delivery stretched further, while the company evaluated replacing Siri's core with external LLMs (OpenAI / Anthropic / Google). Internally this read as a signal that "our own model isn't being adopted," which badly eroded researchers' confidence (not that it was strong to begin with 🤣).

Then came management changes: Siri's leadership moved from AI research chief John Giannandrea's organization to the software product line under Craig Federighi, with Vision Pro chief Mike Rockwell directly running the team. Governance shifted from "research-led" to "product-led."

Compounding the delay, by June 30, 2025 Apple had begun evaluating whether to build Siri on Anthropic or OpenAI models, and by August 22, 2025 it emerged that Apple was negotiating with Google over a custom Gemini model for Siri. With the research team's goals and the "outsource or not" question both in flux, frontline staff no longer knew who was making decisions or by what criteria.

Finally, the straw that broke the camel's back: foundation-models lead Ruoming Pang was poached by Meta (multiple reports put the total package at over $200 million across four years), and "turnover contagion" followed as team members soon left for Meta, OpenAI, or Anthropic.

Of course, all of the above is the researchers' perspective. Judged by Apple's consistent style, the caution can be read as rational: disclosing model details might expose the trade-offs made when compressing smaller models for on-device performance, clashing with Apple's quality-privacy-brand narrative.

Likewise, using external models may be only a transitional measure, since the internal models genuinely have their limits... Upper management may want third-party models to accelerate Siri's capabilities while in-house development "catches up."

Apple hasn’t changed; what’s changed is this era...

Returning to Meta, we can analyze from multiple sources why talent leaves despite such high salaries. As mentioned earlier, many top researchers value mission and research freedom over just compensation.

WIRED points out that some scientists want to shape artificial general intelligence (AGI) and ensure its safety, whereas Meta leans toward short-term productization and profit, with a weaker sense of mission.

Reports from TrendForce and others likewise note that, compared with companies such as Anthropic that are genuinely pushing safety and a long-term vision, Meta's goals look more utilitarian, leaving some talent feeling their "psychological contract" had been broken.

Another issue, in my view, is team stability. Since spring 2025, Meta has reorganized its AI division four times in six months, most recently splitting Superintelligence Labs into four teams: research, infrastructure, products, and long-term models.

The National CIO Review noted that these restructurings, followed by hiring freezes, were meant to "stabilize the structure," but in practice roles and reporting lines changed again and again. Several departing employees said the team was "too dynamic": managers changed several times within a few months, and there was no sense of stability.