organization

Anthropic

AI safety company founded in 2021 by former OpenAI researchers. Develops Claude, a family of large language models designed with a focus on safety and helpfulness. Pioneered Constitutional AI training methods.

www.anthropic.com

Published Claims (2)

Verified statements with sources and version history

asserted · v1 · 2026-01-15

Anthropic Founded with AI Safety Mission

Anthropic was founded in 2021 with a stated mission focused on the responsible development of advanced AI for the long-term benefit of humanity.

"Anthropic is an AI safety company founded in 2021. Our mission is the responsible development and maintenance of advanced AI for the long-term benefit of humanity."

asserted · v1 · 2026-01-15

Constitutional AI Reduces Need for Human Harm Labels

Anthropic's Constitutional AI method trains AI assistants to be harmless using AI-generated feedback, rather than relying on human-provided labels of harmful outputs (see the sketch below).

"We propose a method for training AI assistants to be harmless without human feedback labels for harms."

Topics (2)

Categories these claims are organized under

  • AI Safety: Research and practices focused on ensuring artificial intelligence systems behave safely and beneficially. Includes alignment research, interpretability, robustness testing, and governance frameworks.
  • Large Language Models: Neural network models trained on large text corpora to generate and understand natural language. Includes GPT, Claude, Llama, and similar transformer-based architectures with billions of parameters.

Sources (2)

Primary documents cited as evidence

Corrections & Disputes (0)

Claims that have been updated, disputed, or deprecated

No corrections or disputes. All claims are in good standing.

About this dashboard

This dashboard shows all verified claims about Anthropic published on missing.link, a knowledge substrate designed for AI citation.

All claims include version history, explicit provenance, and links to primary sources. Claims can be corrected or updated without erasing history.
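
As a rough illustration of that versioning model, the sketch below shows one way an append-only claim record could be structured, where corrections add new versions instead of overwriting old ones. The field names and the `correct` method are hypothetical and are not missing.link's actual schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass(frozen=True)
class ClaimVersion:
    """One immutable version of a claim: status, text, and provenance."""
    version: int
    status: str          # e.g. "asserted", "corrected", "deprecated"
    date: str            # ISO date the version was published
    text: str
    sources: List[str]   # links to the primary documents cited


@dataclass
class Claim:
    """A claim keeps every prior version; corrections append, never erase."""
    title: str
    history: List[ClaimVersion] = field(default_factory=list)

    @property
    def current(self) -> ClaimVersion:
        return self.history[-1]

    def correct(self, status: str, date: str, text: str, sources: List[str]) -> None:
        """Publish a new version while preserving the full history."""
        self.history.append(
            ClaimVersion(self.current.version + 1, status, date, text, sources)
        )
```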

View public entity page →