Human Edge Summit asks what it means to be human in the age of AI

At RTP's Human Edge summit, 600 people probed what remains uniquely human as AI advances, and how to govern it responsibly. Talks covered personhood law, healthcare limits, and actionable steps.

Published on: Sep 22, 2025

The Human Edge: What It Means to Be Human in the Age of AI

North Carolina's Research Triangle Park hosted a full-day summit on September 17, 2025, co-led by RTI International and Elon University. More than 600 participants joined in person or via Zoom to address a direct question: what remains uniquely human as artificial intelligences gain ground - and how do we govern, study, and deploy them responsibly?

Why this summit mattered for scientists and research leaders

The agenda pushed beyond hype. Sessions focused on education, human agency, creativity, well-being, and a shared research program for responsible AI. A roundtable of higher education leaders outlined current initiatives and research across North Carolina's campuses.

Setting the tone: principles and responsibility

Elon University President Connie Book urged the audience to face core questions about humanity's place alongside AI. She highlighted Elon's long-standing work through the Imagining the Internet Center, now the Imagining the Digital Future Center, and a global set of principles for AI in higher education developed with contributors from 48 countries and released at the United Nations Internet Governance Forum in 2023.

She also cited "The Student Guide to Artificial Intelligence," created with the American Association of Colleges and Universities and adopted by roughly 4,000 institutions and organizations worldwide. Her message: institutions must proactively address the coevolution of humans and digital systems - and move from talk to action.

RTI International President and CEO Tim Gabel echoed the dual mandate: ambition and accountability. He cited RTI projects including public health communication tools, a new AI layer for internal research, and a "digital twin" of the U.S. population used to model disease spread and test responses. The takeaway: the technology's value depends on human choices, incentives, and governance.

Keynote: Legal personhood and the line between "someone" and "something"

James Boyle, William Neal Reynolds Professor of Law at Duke University and author of "The Line: AI and the Future of Personhood," challenged attendees to examine how law and morality respond as machine capabilities grow. Language is no longer a reliable test of sentience, he argued, because modern systems produce fluent output at scale.

Boyle outlined three themes: AI will provoke new inquiry into consciousness and human exceptionalism; law will have to revisit personhood across biological and non-biological entities; and encounters with machine intelligence will reflect our own ethical standards back to us. He noted that current systems like ChatGPT are not conscious, while acknowledging that future shifts are plausible. His closing: proceed with caution - and humility.

Keynote: AI in healthcare - progress, limits, and what actually works

Erich Huang, head of clinical informatics at Verily and chief science & innovation officer for Onduo/Verily, grounded his talk in a trauma case. Stabilizing a crash victim, placing chest tubes, rushing to surgery, and comforting a family - these are competencies models and robots do not deliver today.

His thesis: large language models identify and synthesize information, but they don't build the culture, incentives, or workflows that change clinical behavior. Electronic health record data and billing codes often mirror reimbursement priorities rather than biology, which can bias models from the start. Aligning payment with outcomes would improve data quality and model reliability.

Huang has invited technologists to do "clinical rotations" at the bedside to witness tacit practices that rarely enter charts but drive safety. He urged the field to generate higher-quality clinical data, validate models for specific jobs, and embed them in team-based workflows where humans still coordinate care and deliver hard news. The aim is practical intelligence that helps patients get better.

Public sentiment: what experts and Americans expect AI to change

Lee Rainie, director of Elon University's Imagining the Digital Future Center, summarized new survey research on how experts and the public expect AI to affect human traits. Of 12 traits studied, experts predicted negative outcomes for nine over the next decade. Creativity, curiosity, and problem-solving drew cautious optimism.

Those with higher education levels were more pessimistic than those with lower levels - a reversal of earlier technology adoption patterns. Rainie framed this moment as different from prior industrial shifts because, for the first time, people are sharing cognitive space with tools that present as intelligent.

What researchers and institutions can do now

  • Define data standards and provenance. Treat clinical and administrative data as biased until proven otherwise; audit for labeling, missingness, and incentives.
  • Link payment and outcomes. Better incentives produce better data - and better models.
  • Validate for tasks, not hype. Use pre-registration, baselines, and error analysis; monitor post-deployment drift and real-world impact.
  • Keep humans accountable. Establish oversight that protects agency, transparency, and recourse; clarify handoffs between systems and people.
  • Institutionalize interdisciplinary rotations. Put engineers in clinics, classrooms, and policy labs; bring domain experts into model design and evaluation.
  • Advance scholarship on personhood and rights. Prepare legal, ethical, and technical frameworks for systems that exhibit humanlike capacities.
  • Adopt shared principles for AI in education. Use cross-institution guidelines to align policy, pedagogy, and procurement.
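The first recommendation, treating data as biased until proven otherwise by auditing for missingness, can be sketched as a simple pre-training check. This is an illustrative example, not a tool used by the summit participants; the field names and threshold are hypothetical.

```python
# Illustrative audit: flag fields with high missingness before model training.
# The record schema and the 20% threshold are assumptions for this sketch.

def missingness_report(records, threshold=0.2):
    """Return {field: missing_rate} for fields whose rate exceeds threshold."""
    total = len(records)
    fields = set()
    for rec in records:
        fields.update(rec)
    rates = {}
    for field in fields:
        # A field counts as missing when it is absent or explicitly None.
        missing = sum(1 for rec in records if rec.get(field) is None)
        rates[field] = missing / total
    return {f: r for f, r in rates.items() if r > threshold}

# Synthetic records: 'a1c' is missing or None in 2 of 4 patients.
records = [
    {"age": 54, "a1c": 7.1},
    {"age": 61, "a1c": None},
    {"age": 47},
    {"age": 70, "a1c": 6.4},
]
print(missingness_report(records))  # {'a1c': 0.5}
```

A report like this makes the bias question concrete: if a lab value is missing for half the population, the model is learning from whoever happened to be tested, which often reflects reimbursement incentives rather than biology.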

Toward a shared research agenda

Across breakout sessions, attendees worked to align research goals: build trustworthy datasets, create domain-specific benchmarks, test deployment in real settings, and measure human outcomes. The conference emphasized collaboration across universities, industry, and public-interest organizations.

Support and organizers

The summit was supported by Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences. It was organized by Elon University's Imagining the Digital Future Center (with Lee Rainie) and RTI International's Fellows Program (with Brian Southwell) and University Collaboration Office (with Katie Bowler Young).
