Rote Learning Meets ChatGPT: Why Bangladesh's Curriculum Isn't Ready

AI is polishing student work while blurring thinking and trust in the classroom. Schools must set clear policies, teach AI literacy, and redesign assessments to prize process and hands-on work.

AI Is Outpacing the Curriculum. Here's How Education Can Catch Up

January 22, 2026 - Dhaka. In just a few years, student writing has gone from messy drafts to uniformly polished submissions. That polish often signals quiet support from generative tools. The result: smoother prose, weaker thinking.

AI can speed up learning tasks. Used uncritically, it also chips away at originality, judgment, and trust in the classroom. The question isn't "Should students use AI?" The question is "How do we integrate it without hollowing out education?"

What educators are seeing

At the school level, process is losing to product. Mahfuzul Haque Sadim Chowdhury, assistant teacher at Milestone School and College, puts it bluntly: "School students are stuck at rote learning, and that makes them more vulnerable to misuse of AI… teachers are not trained to incorporate AI in the learning process, and often their abysmal salary structure does not even incentivise learning and receiving training."

Even strong curricula don't fully shield against shallow use. "The Cambridge syllabus… pushes students to personally engage with the materials," says Tashfia Ahmed of Scholastica. "Earlier, students used to develop synthesising skills by collecting information from different sources and evaluating that information, but now that is being outsourced to AI."

At universities, trust is strained and policy is thin. "Sometimes when I find a student's writing quality has dramatically improved… a sense of distrust does loom over," notes Dr Nurul Huda Abul Monsur, professor of history. "If a student is only using it to develop their vocabulary, these AI tools can be a friend… I have hardly any tools to be precisely sure if cheating has occurred or not."

Across disciplines, the gap between course design and real practice is visible. "University curriculum is designed to provide a strong theoretical foundation… while industry increasingly expects practical AI expertise," says Muhammad Shafayat Oshman, lecturer in CSE at North South University. He points to areas like semiconductor and chip design that remain "largely theoretical" in classrooms. The call is clear: bridge theory with hands-on work.

System leadership is lagging, too. "The Bangladeshi education system needs to adopt AI according to discipline and come out of the preconceived notion that AI is merely a cheating tool," argues Dr Mohammod Moninoor Roshid of Dhaka University's IER. He urges the University Grants Commission to set direction and help senior academics engage with AI rather than fear it.

The core issue

Generative tools often produce fluent but unreliable outputs. Without a clear framework, students default to speed over sense, and teachers default to suspicion over support. The system feels reactive, not adaptive.

A practical playbook for schools and universities

  • Publish a clear AI use policy. Define allowed, supported, and prohibited uses with examples. Require disclosure of any AI assistance, including prompts and tools used. Set citation rules for AI-generated text, images, and code. Align this at department or program level to reduce ambiguity. For reference, see guidance from UNESCO.
  • Redesign assessment to value thinking over polish. Increase in-class writing, oral defences, whiteboard problem-solving, and iterative drafts with feedback. Ask for version history, planning notes, and annotated sources. Permit AI in defined ways (e.g., idea generation, editing), but require students to submit AI outputs and reasoning as an appendix.
  • Teach AI literacy, not just AI tools. Focus on evaluation: bias, factual errors, missing citations, and overconfident text. Train students to verify claims with primary or reputable sources and to document their verification steps.
  • Invest in teacher capacity. Offer paid time for learning, micro-credentials, and peer practice groups. Start short, recurring workshops on prompt critique, classroom use-cases, and assessment redesign. For structured options, explore job-focused pathways at Complete AI Training.
  • Make curricula hands-on. Replace rote tasks with projects that use data, experimentation, and reflection. In CSE, add labs for model deployment, hardware-aware AI, and collaboration with local industry on chip design and embedded systems. In humanities and social sciences, emphasize argument quality, source triangulation, and ethical citation.
  • Don't rely on AI detectors. Use them, if at all, as one signal among many. Prioritise design choices that reduce misuse: oral check-ins, scaffolded drafts, and authentic tasks connected to lived context or local data.
  • Protect privacy and integrity. Avoid uploading sensitive student work to public tools. Prefer institutionally managed accounts, clear data policies, and opt-in consent. Teach students how to redact and how to check tool settings.
  • Address equity. Provide device access, on-campus AI labs, and low-bandwidth alternatives. Publish the approved tool list and access instructions in one place.

A simple policy template you can adapt

  • Purpose: Support learning while preserving academic integrity and privacy.
  • Allowed uses: Brainstorming, outlining, language editing, code comments, and study planning with disclosure.
  • Prohibited uses: Submitting AI text or code as one's own final work unless explicitly permitted; fabricating citations or data; uploading confidential materials to public tools.
  • Disclosure: Students include a short AI usage note listing tools, prompts, and where AI influenced the work (for example: "Used ChatGPT to brainstorm outline options and edit grammar; the argument and sources are my own").
  • Verification: Require drafts, notes, version history, and brief oral explanations for complex work.
  • Consequences: Outline proportional responses tied to intent and transparency.
  • Support: Provide workshop times, office hours, and a quick-reference guide for approved tools.

Implementation timeline

  • Next 30 days: Publish a faculty-approved AI statement; pilot disclosure notes in 3-5 courses; run one staff workshop; add a short AI literacy module to writing and lab courses.
  • Next 90 days: Redesign two major assessments per program; set up an AI teaching community of practice; create an approved-tools list and privacy guidance.
  • 6-12 months: Update program learning outcomes to include AI literacy; add hands-on industry projects; budget for teacher training time and course release.

Bottom line

AI can make study more efficient. Used without guidance, it erodes originality, critical thinking, and trust. The move now is to be explicit: teach the process, define the boundaries, and make students show their work.

This isn't about banning tools. It's about building a curriculum that prizes process over product, critical engagement over convenience, and ethical use over avoidance. Set the framework, train your staff, and let assessments do the heavy lifting.

