
Fei-Fei Li
Computer scientist, Stanford professor, co-founder of Stanford HAI and AI4ALL, and co-founder and CEO of World Labs
46 of 100 · improving trend · Visibly decent and improving
Standing: 46/100
Raw Score: 39/85
Confidence: 66%
Evidence: Strong
About
Li's public record pairs major scientific contribution with sustained work to widen access to AI and push human-centered governance, though the 2018 Project Maven episode remains a real integrity caution.
The observable pattern is constructive and public-minded, especially in education, inclusion, and health-facing AI, but the belief and worship dimensions are mostly private or undocumented, and one controversy complicates trust signals.
Five Pillars
Pillar scores (0–100%)
Li's public record shows repeated institution-building for inclusion and human-centered governance, but the available evidence on faith, worship, and direct material relief is thin, and the Project Maven episode remains a real integrity limitation.
Goodness over time
Starts at 100 at birth; decays naturally after the age of accountability; timeline events adjust the trajectory.
17 Criteria Scores
Individual item scores (0–5) with evidence notes
Core Worldview
No strong public evidence of explicit theistic belief was found.
Her public ethics language shows accountability, but not explicit eschatological belief.
Her work often frames AI as morally consequential, but not through explicit faith claims.
No clear public evidence found.
No clear public evidence found.
Contribution to Others
Family support during immigration is visible, but later evidence is limited.
AI4ALL and pipeline work repeatedly serve younger people with fewer opportunities.
Public record shows indirect structural help more than direct relief to the poor.
Inclusion work helps outsiders enter elite fields, but evidence is still indirect.
Mentorship and education initiatives create direct pathways for applicants and students.
Her diversity and public-research advocacy push against exclusion and concentration of power.
Personal Discipline
No reliable public evidence found.
No reliable public evidence found for disciplined obligatory giving.
Reliability
Long-term institution building supports a positive score, but Project Maven prevents a stronger one.
Stability Under Pressure
Early immigrant hardship is well documented.
Her memoir-related interviews describe durable persistence under family and identity strain.
She remained publicly engaged after the 2018 ethics backlash, but with mixed trust signals.
Timeline
Key events and documented turning points
Immigrated to the United States and helped support her family through hardship
As a teenager, Li moved from China to New Jersey with her family, learned English while staying in school, and worked in restaurants and in her parents' dry-cleaning business to help the family stay afloat.
→ This period provides strong public evidence of resilience and filial responsibility under financial pressure.
Evidence: high
Helped launch ImageNet as a foundational open academic resource
Li's work on ImageNet created a benchmark dataset that helped accelerate modern computer-vision research and showed a long-horizon commitment to shared scientific infrastructure rather than only private advantage.
→ The project materially shaped the field and strengthened her standing as a builder of shared knowledge.
Evidence: high
Co-founded AI4ALL to widen access to AI education
Li co-founded AI4ALL to bring more women, Black, Latinx, Indigenous, and other underrepresented students into AI through education, mentorship, and career pathways.
→ AI4ALL became a durable institution with measurable participation and internship outcomes, giving this commitment concrete downstream effects.
Evidence: high
Project Maven emails created a public integrity controversy
Leaked internal emails during Google's Pentagon Project Maven backlash showed Li warning colleagues to avoid mentioning AI in the contract's framing, which critics read as reputation management that sat uneasily beside her public human-centered ethics language.
→ The episode did not erase her later ethics work, but it remains a real negative signal around transparency under pressure.
Evidence: high
Co-launched Stanford's Human-Centered AI Institute
Li helped launch HAI at Stanford to bring engineering, humanities, medicine, law, and policy into a more explicitly human-centered AI project.
→ The move strengthened her public record as an institution-builder who tied technical leadership to ethics and policy.
Evidence: high
Urged transparent, fair, and publicly accountable AI governance in Senate testimony
In Senate testimony, Li argued for demystifying AI, protecting privacy and fairness, improving transparent procurement, and investing more in public AI research rather than leaving advanced AI to a few firms.
→ This is strong recent evidence that her public commitments now center on public-interest guardrails rather than private hype alone.
Evidence: medium
Helped launch RAISE Health around responsible AI in medicine
Li publicly positioned RAISE Health as a multi-stakeholder effort to make AI in health care more transparent, fair, and equitable, with attention to social determinants and unintended harm.
→ This extended her public-interest pattern from education into health-care equity and responsible deployment.
Evidence: high
Pressure Tests
Behavior under crisis or scrutiny
Immigrant family hardship
1992: Her family arrived in the United States with very little money, and she had to learn English while helping support the household.
Response: She kept studying, worked in restaurants and the family dry-cleaning business, and eventually earned a full scholarship to Princeton.
Outcome: positive
Project Maven backlash
2018: Leaked emails and employee protests put her Google Cloud role under ethical scrutiny.
Response: The episode damaged trust, but her later public work leaned more explicitly toward transparency, fairness, and human-centered guardrails.
Outcome: mixed
AI governance pressure
2023: Rapid commercialization of generative AI raised pressure to choose between private advantage and public accountability.
Response: She publicly argued for fair procurement, privacy protections, stronger public research capacity, and multidisciplinary oversight.
Outcome: positive
Progression
early years
Financial hardship and immigrant adjustment forged a durable resilience pattern.
Trend: up
growth years
Scientific ambition widened into institution building and access work.
Trend: up
crisis years
Ethical pressure exposed a real weakness in transparency during the Google Cloud period.
Trend: mixed
current stage
Current work blends frontier AI leadership with stronger public-interest framing around medicine, fairness, and research access.
Trend: up
Behavioral Patterns
Positive
- Repeated effort to widen access to AI education for underrepresented students
- Consistent public emphasis on human-centered and public-interest AI
- Long-horizon institution building across academia, nonprofit, and startup settings
Concerns
- The Project Maven email episode created a real gap between ethical branding and internal crisis handling
- Belief and worship discipline are not meaningfully observable in the public record
Evidence Quality
Strong: 10
Medium: 2
Weak: 0
Overall: strong
This profile evaluates observable public behavior and evidence, not the state of a person's soul.