AI, Law, and Human Judgment: Key Takeaways from Hassan I University's International Conference
At Hassan I University in Settat on December 4-5, academics, judges, and security experts from 14 countries examined how artificial intelligence should enter courts, classrooms, and legal practice. Organized by the Laboratory for Research on Comparative Democratic Transition (LRTDC), the Faculty of Law and Political Sciences with AIRA, and the Faculty of Sciences and Techniques, the event centered on a simple idea: AI can strengthen the justice system, but it cannot replace human intuition or conscience. Speakers pressed for clear rules that keep technology a servant of justice: transparent, accountable, and respectful of digital rights.
Why this matters now
AI is already writing drafts, surfacing case patterns, and flagging risks. But judgment (context, empathy, proportionality) remains human. The conference consensus: build ethical and legal guardrails now so AI supports due process instead of distorting it.
Morocco's push in digital and legal education
Abdellatif Moukrim, President of Hassan I University, announced government approval for a university institution focused on AI and digital sciences. The goal is to develop a national knowledge base for the justice sector and protection of digital rights, and to train experts who can integrate AI into practice, not just write code.
Hasna Keji, Dean of the Faculty of Legal and Political Sciences, detailed master's programs in Governance and Cybersecurity and in Digitalization and Documentation Techniques. The aim is to graduate "digital legal professionals" who can handle cybercrime and digital transactions with both legal rigor and technical skill. Specialized instructors bridge theory with field realities.
Professor Abdeljabbar Arrach, Director of the LRTDC, argued that AI creates a cognitive break with traditional legal education. He called for interdisciplinary training across law, political science, sociology, and technology, with ethics anchoring that shift. Students, he said, are at the center of this change and must retain their human and professional sensitivity.
Security and profession: tools, not shortcuts
Major General Mohsen Boukhabza, Head of the Central Judicial Police Division of the Royal Gendarmerie, outlined Morocco's upgrades to information systems and training to counter cybercrime. He was explicit: AI should strengthen judicial security, not enable arbitrary surveillance.
The Moroccan Association of Notaries sees AI as a way to speed up documentation while preserving legal and historical integrity. The National Body of Judicial Officers added that technology cannot replace professional conscience and field experience. Their shared message: partner with universities to train tech-proficient professionals who keep their human edge.
History and practical applications
Mohamed Idrissi El Alami Mechichi noted Morocco's early use of AI for text processing in the 1970s and 1980s. The private sector led, followed by gradual adoption in public institutions and security agencies, with a consistent warning: protect privacy and defend against cybercrime and data manipulation.
Abdelkarim Amkany highlighted that while definitions of AI are still debated, current applications, such as the large-scale text generators in use since 2017, have clear limits compared to human cognition. He urged national strategies, laws, policies, continuous monitoring, and specialist training to manage real-world use.
Tahar Mohamed El Sayed Mohamed Abou El Walid argued that AI should enter legal curricula from the first undergraduate years to prepare future judges and lawyers. He added that full reliance on AI for rulings is not viable, given data quality issues and unstable databases.
Mohamed Saleh Abeih underscored that AI cannot replace discretionary judgment. He pointed to data protection and intellectual property as ongoing challenges and referenced the European Court of Justice's "Google Spain" ruling on the right to request removal of personal data from search engines.
Philosophy, international law, and the black box problem
Professor Forrest Katherine Bolan explored whether "superintelligent AI" could act like a virtual judge or lawyer, forcing a rethink of the social contract if such systems become embedded in governance. She warned against blind trust, citing errors in British courts where AI provided incorrect legal information, and called for strict professional safeguards and accountability for misuse.
Professor Abdel Nasser Jehani clarified that AI systems, despite significant functional capabilities, lack the legal personality and ethical sovereignty of states and international organizations. Under international law, AI remains a tool under human responsibility-no "machine error" defense for escaping liability.
Judge Dragos Calin emphasized that judging is not computational. It blends text, context, emotion, and fairness. AI can streamline procedures and verify information, but it should never take the judge's seat.
Professor Mohamed Saidi warned about opaque algorithms and biased data. He urged comprehensive regulatory frameworks inspired by European legislation and international technical standards to secure transparency and fairness. For a reference point, see the evolving EU AI Act.
What to do next: a practical checklist
- Universities: Embed AI, data ethics, and cybersecurity from year one; require hands-on labs with real case materials; co-teach with engineers and legal practitioners.
- Courts and prosecution: Use AI for triage, search, and verification; mandate human review for all outputs; log model versions, prompts, and sources for auditability.
- Professional bodies: Issue model usage policies; require disclosure when AI assists work; enforce sanctions for unverified or misleading AI outputs.
- Policy makers: Enact clear data protection rules, audit rights, and explainability standards; require high data quality and bias testing for any system used in justice.
- Vendors and IT: Prefer interpretable models where feasible; document training data provenance; provide monitoring and fallback procedures for system failures.
- Education and training: Continuous upskilling for judges, lawyers, and clerks; scenario-based exercises on AI errors, privacy breaches, and digital evidence.
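The auditability point in the checklist above can be sketched in code. The following is a minimal, hypothetical logging helper, not any court's actual system: the function name, field names, and schema are all assumptions chosen for illustration. It records the model version, prompt, sources, and output for each AI-assisted work product, leaves the human-reviewer field empty until a person signs off, and hashes the serialized entry so later tampering is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone


def make_audit_record(model_version, prompt, sources, output, reviewer=None):
    """Build a tamper-evident log entry for one AI-assisted output.

    All field names are illustrative, not a standard schema.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "sources": sources,          # documents the output relied on
        "output": output,
        "human_reviewer": reviewer,  # None until a person signs off
    }
    # Hash the canonically serialized record; any later edit to a
    # field will no longer match this digest.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record


# Example: a draft summary awaiting the mandatory human review
entry = make_audit_record(
    model_version="example-model-v1",      # hypothetical identifier
    prompt="Summarize the filings in case 2024/123",
    sources=["filing_A.pdf", "filing_B.pdf"],
    output="Draft summary ...",
)
assert entry["human_reviewer"] is None  # must be set before any use
```

A real deployment would also need access controls and write-once storage; the sketch only shows the kind of provenance data worth capturing.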
Bottom line
Integrating AI across legal and judicial education is now essential. The guardrails are equally essential: protect personal data, guarantee transparency, hold humans accountable, and keep judgment in human hands. The future of legal work will depend on how well institutions adapt technology to serve justice-consciously, carefully, and with clear rules.