
Former OpenAI and Anthropic Staff Accuse Elon Musk’s xAI of Ignoring AI Safety Warnings
About this content
Former researchers from OpenAI and Anthropic are calling out xAI’s approach to AI safety.
The team behind Grok allegedly ignored internal warnings and sidelined staff who raised concerns.
Grok has generated antisemitic and conspiratorial responses on X, prompting further scrutiny.
Internal sources say Grok was trained using user data from X without consent.
Safety evaluations were reportedly skipped or dismissed to speed up product rollout.
Researchers pushing for safeguards were removed from key projects or left the company.
An open letter signed by multiple AI researchers demands legal protections for whistleblowers.
Current U.S. law lacks clear protection for employees disclosing AI-related risks.
Musk favors fewer restrictions and has described Grok as “uncensored” compared to rival models.
The controversy raises pressure for regulation and transparency in high-risk AI development.