Anthropic AI hackathon at U of T spotlights autism support prototypes and a solo end-to-end build in 48 hours

At U of T's Anthropic AI hackathon, teams shipped working prototypes in under 48 hours. Highlights: an ABA risk-to-strategy tool and a solo-built system that placed second.

Categorized in: AI News, IT and Development
Published on: Dec 02, 2025

Anthropic AI Hackathon at U of T highlights practical builds for behavioral support and solo development

The University of Toronto hosted an Anthropic AI Hackathon over the weekend, where teams and solo builders shipped working prototypes in under 48 hours. Many leaned on Claude for reasoning and orchestration alongside standard ML stacks. Posts from participants surfaced two clear themes: behavioral support tools and end-to-end systems built by a single developer.

Anthropic is known for the Claude family of models used for analysis, reasoning, and agentic workflows. Learn more about Claude here.

ABA Forecast: context-driven risk estimation plus LLM guidance

ABA therapist and data analyst Waseh Niazi and three collaborators built ABA Forecast, a prototype that tests whether behavioral risk can be estimated from context common in applied behavior analysis sessions. The team trained a Random Forest Classifier on a mix of synthetic data and openly available datasets, then used Claude to translate predictions into structured strategy suggestions mapped to ABA routines.

Key variables included:

  • Sleep quality and time of day
  • Transitions and social context
  • Toileting patterns
  • Environmental conditions via a weather API

In practical terms, the pipeline looked like this: feature assembly from session context and APIs, model training and evaluation, risk scoring, and an LLM layer that outputs clear strategies for the next session. It's an early-stage build, not a finished clinical tool, but it points to a useful pattern for pairing classical ML with an LLM for explainable, actionable output.

If you're exploring similar modeling, the RandomForestClassifier docs are a solid starting point: scikit-learn.
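As a rough sketch of that pattern, the snippet below trains a scikit-learn RandomForestClassifier on toy context features, scores an upcoming session, and passes the score to Claude through the Anthropic Python SDK. The feature layout, training data, and model ID are illustrative assumptions, not the team's actual implementation.

```python
import anthropic
from sklearn.ensemble import RandomForestClassifier

# Hypothetical context features per session:
# [sleep_quality, hour_of_day, transition_count, group_setting, toileting_flag, temp_c]
X_train = [
    [0.8, 9, 1, 0, 0, 18.0],
    [0.3, 14, 5, 1, 1, 29.5],
    [0.6, 10, 2, 1, 0, 21.0],
    [0.2, 15, 4, 1, 1, 31.0],
]
y_train = [0, 1, 0, 1]  # 1 = elevated behavioral risk observed

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Score the upcoming session from its assembled context.
next_session = [[0.4, 13, 4, 1, 1, 27.0]]
risk = clf.predict_proba(next_session)[0][1]  # probability of the risk class

# LLM layer: turn the score and raw context into structured suggestions.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
prompt = (
    f"Estimated behavioral risk for the next ABA session: {risk:.2f}.\n"
    "Context: poor sleep, early afternoon, several transitions, group setting, hot day.\n"
    "Return JSON with keys 'risk_level' and 'strategies' "
    "(a list of short, ABA-aligned suggestions)."
)
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current Claude model ID
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)  # validate/parse as JSON before showing to users
```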

Solo submission places second: end-to-end under pressure

Full stack developer and data analyst Issa Al Rawwash shared that he placed second with a solo build. "Every decision was mine. Every line of code was mine. Every pivot happened in real-time." He described the pace and pressure as an "ultimate test," and presenting a complete system to judges as a high point in his trajectory.

For developers, this signals a workable bar: one person, a tight scope, and a clear narrative from problem to demo. Architecture, data, API integration, and presentation, all owned front to back.

Practical takeaways for engineers

  • Pair a lightweight classifier with an LLM layer to turn raw scores into structured, human-readable recommendations.
  • Context features matter. Pull signals from session notes, time of day, sleep, transitions, and environment (e.g., weather) to strengthen the predictive signal.
  • Time-boxed builds favor simple, inspectable models and small, well-defined prompts with schema-constrained outputs (see the validation sketch after this list).
  • Be explicit about scope. Prototypes in health or education contexts should avoid clinical claims and document assumptions.
  • Log everything: inputs, feature versions, model hashes, prompts, and outputs for fast iteration and auditability.
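
For the schema-constrained outputs mentioned above, a minimal validation sketch using pydantic might look like this. The field names are illustrative assumptions, not the schemas used at the hackathon.

```python
from typing import List, Literal, Optional

from pydantic import BaseModel, ValidationError


class StrategyPlan(BaseModel):
    """Expected shape of the LLM layer's JSON output (illustrative fields)."""
    risk_level: Literal["low", "medium", "high"]
    strategies: List[str]
    rationale: str


def parse_plan(raw_json: str) -> Optional[StrategyPlan]:
    """Accept only well-formed plans; anything off-schema gets rejected."""
    try:
        return StrategyPlan.model_validate_json(raw_json)
    except ValidationError:
        return None  # log, then retry with a corrective prompt or fall back


print(parse_plan(
    '{"risk_level": "medium", '
    '"strategies": ["shorten transitions", "schedule a sensory break"], '
    '"rationale": "poor sleep and a busy group setting"}'
))
```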

Build notes if you want to replicate the pattern

  • Data: Start with synthetic data to define labels and edge cases; add open datasets where appropriate. Validate with domain experts.
  • Model: Baseline with RandomForestClassifier; track metrics that map to decisions, such as precision/recall at action thresholds (see the threshold sketch after this list).
  • Context: Enrich with a weather API and structured session metadata; keep a clear feature registry.
  • LLM layer: Use Claude to generate strategy suggestions as JSON-first outputs, then render for clinicians/educators.
  • Guardrails: Add prompt-level constraints, PII redaction, and human-in-the-loop review for any sensitive recommendation.
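
On the metrics point, here is a quick sketch of scoring precision and recall at the probability threshold where the tool would actually act. The threshold, labels, and scores below are made up for illustration.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 1, 0, 1, 1, 0, 0, 1])                     # observed outcomes
y_prob = np.array([0.2, 0.7, 0.4, 0.9, 0.55, 0.1, 0.65, 0.8])   # model risk scores

ACTION_THRESHOLD = 0.6  # only surface a recommendation above this score
y_pred = (y_prob >= ACTION_THRESHOLD).astype(int)

print(f"precision@{ACTION_THRESHOLD}:", precision_score(y_true, y_pred))
print(f"recall@{ACTION_THRESHOLD}:", recall_score(y_true, y_pred))
```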

The hackathon showed range: from a behavioral support prototype that turns context into action to a solo-engineered system built and presented under strict time constraints. Niazi invited others in ABA, autism services, health tech, and edtech to connect and explore similar questions.

If you're building with Claude and want a structured path to level up, explore this resource: Claude Certification.

