AI, Coding Skills, and the Practice Gap: Sloan Funds Pilot Study
Dec. 24, 2025
Distinguished Professor of Information Science Kevin Crowston has received a $50,000 grant from the Alfred P. Sloan Foundation to pilot research on a simple question with big stakes: Do generative AI tools help developers learn to code, or do they erode core skills by taking over the practice?
"Generative AI is expected to change many different kinds of work, but it's already having an impact on coding, where it's particularly useful," Crowston says. He points to a 2024 estimate from Google's CEO that roughly a quarter of the company's code was written with AI assistance-evidence that habits are already shifting.
The grant kickstarts year one of a three-year proposal to the National Science Foundation. The team includes professor of practice Michael Fudge; Francesco Bolici, associate professor at the University of Cassino and Southern Lazio; doctoral students Akit Kumar (Syracuse) and Alberto Varone (Italy); and undergraduate researcher Cassandra Rivera '27. "It gives us external validation that our project is addressing an interesting and important idea," Crowston says.
Study 1: How Students Learn (or Don't) With AI
The first study tracks how undergraduates in an intro Python course use generative AI. The core hypothesis: if students let the tool do the assignment, they get the grade but not the skill. If they query the model line by line and verify logic, they learn more.
Motivation matters. Students with genuine interest may use AI to probe and clarify. Students under time pressure or taking the course as a checkbox may offload the work and stall their growth.
Crowston adds a practical shift to watch: "Maybe the days of coding each for loop are behind us. Maybe the real skill is learning how to convey what you want to the AI, and to check that it did it correctly." The study will map how these new AI-facing skills interact with traditional programming competencies.
Study 2: Experienced Programmers in Scientific Software
The second study will interview 40 professionals who write software for scientific research. The goals: document how they use generative AI, what value they see, and where they worry about long-term effects on their own abilities.
Risk is higher in niche domains. General-purpose models have seen lots of public Python, but far less code for astrophysics pipelines, bioinformatics workflows, or simulations of black hole collisions. "You could imagine the model producing code that looks plausible but isn't scientifically accurate," Crowston says.
Experienced developers tend to be highly cautious ("they're really, really worried about it"), while newer programmers may not have the same skepticism. The bigger question touches every field: What happens to expertise when AI takes over routine tasks and entry-level opportunities shrink? As Crowston notes, if AI absorbs junior work, two years later you have fewer people with two years of experience.
Why This Matters for Research Teams
If AI reduces repetition, practice decreases. Less practice means slower growth in debugging intuition, code review judgment, and domain-specific modeling. That's the expertise you rely on when results must be defensible.
The findings will help instructors, lab managers, and research software engineers decide where to set guardrails, how to teach verification, and how to maintain a healthy pipeline from novice to expert.
Practical Moves You Can Make Now
- Define acceptable AI use by task (ideation, scaffolding, refactoring, or tests), then require human verification for domain logic.
- Adopt "explain-your-code" prompts: every AI-assisted snippet must come with a short rationale and assumptions.
- Pair AI with testing: enforce unit tests, property-based tests, and checks against known scientific baselines (a minimal sketch follows this list).
- Track exposure, not just output: log where AI contributed, then review those hotspots in code reviews.
- Protect learning time: for student and junior roles, use progressive hints instead of full solutions; rotate "no-AI sprints" to maintain fundamentals.
- Teach specification writing: clear problem statements, constraints, and edge cases improve both AI outputs and team communication.
- Build skepticism into workflow: require secondary validation for results produced from AI-generated analytical code.
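To make the "pair AI with testing" item concrete, here is a minimal sketch of what wrapping an AI-drafted routine in verification could look like in Python. The trapezoid integrator stands in for any assistant-generated snippet; the function names, tolerances, and the use of the standard-library unittest module (rather than a property-testing library such as Hypothesis) are illustrative assumptions, not part of the study.

```python
"""Minimal sketch: verifying an AI-suggested routine with tests and a known baseline."""
import math
import random
import unittest


def trapezoid(f, a, b, n=1_000):
    """Composite trapezoidal rule; stands in for an AI-drafted numerical routine."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h


class TestAIGeneratedIntegrator(unittest.TestCase):
    def test_known_scientific_baseline(self):
        # Check against an analytic result: the integral of sin(x) on [0, pi] is exactly 2.
        self.assertAlmostEqual(trapezoid(math.sin, 0.0, math.pi), 2.0, places=5)

    def test_property_linearity(self):
        # Property-style check: integration should be linear in the integrand.
        # Hand-rolled random sampling keeps the sketch dependency-free.
        rng = random.Random(42)
        for _ in range(25):
            c = rng.uniform(-5.0, 5.0)
            a, b = sorted(rng.uniform(-3.0, 3.0) for _ in range(2))
            lhs = trapezoid(lambda x: c * x ** 2, a, b)
            rhs = c * trapezoid(lambda x: x ** 2, a, b)
            self.assertAlmostEqual(lhs, rhs, places=7)

    def test_unit_case(self):
        # Plain unit test: the integral of a constant is width times height.
        self.assertAlmostEqual(trapezoid(lambda x: 3.0, 1.0, 4.0), 9.0, places=9)


if __name__ == "__main__":
    unittest.main()
```

The pattern, not the integrator, is the point: let the assistant draft the routine, but keep the analytic baselines and invariants as the part humans write and review.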
Resources
Learn more about the funder: Alfred P. Sloan Foundation.
Upskill your team's coding and AI practice: AI Certification for Coding.
This project will test a hard truth many teams feel: AI can speed you up, but only if the human skill stack of specification, verification, and scientific judgment stays intact. The research aims to show where AI helps learning, where it hurts, and how to design for both productivity and competence over time.