Metrolinx Admits to Using AI in GO Transit X Account Responses
Metrolinx has confirmed that artificial intelligence was used to help draft customer responses on the GO Transit X (formerly Twitter) account. The agency says it won't happen again, after several replies showed clear signs of AI-generated content.
Before July 5, GO Transit’s replies were brief and ended with the initials of the staff member responding. Lately, responses have lost those personal touches and include typical AI markers like em dashes (—), emojis, a peppy tone, and repeated information reflecting the user’s question.
Examples of AI-Like Responses
Liberal MPP Rob Cerjanec highlighted one response to a rider who rushed to catch the last train after a Coldplay concert in Toronto. GO Transit replied, “Sounds like Ange had a dramatic dash to catch that last northbound GO train at 11:13 p.m. That’s cutting it close!” The account also told the rider, “You go visit our website” to check schedules. The tweet was later deleted.
In another instance, GO Transit responded to a suggestion about expanding service to Alberta and Vancouver. The reply was positive but vague, thanking the user for the “thoughtful suggestion” and expressing shared enthusiasm, despite GO Transit’s operations being limited to southern Ontario since the 1960s. This reply included emojis and phrasing flagged by AI detection tools.
Signs of AI Use and Errors
- Inconsistent capitalization of “GO Transit” (e.g., “Go transit” or “Go Transit”) appeared in multiple posts after July 5, which wasn’t seen before.
- One reply mentioned plans to electrify trains with upgrades scheduled through “20321,” an obvious typo that was later removed.
- Several posts have since been deleted, suggesting some content was recognized as inappropriate or inaccurate.
Metrolinx’s Response
Metrolinx confirmed that AI was used “in a reply” by a contact centre vendor supporting the GO Transit X account. While Metrolinx employees run the agency's own social media accounts, the GO Transit account was handled by this vendor, which drafted replies with AI assistance, something Metrolinx said was inappropriate.
Spokesperson Andrea Ernesaks stated that the replies are typed by humans, not AI bots. However, the vendor used AI to draft at least one response, violating Metrolinx’s customer support standards. The agency has since instructed the vendor to stop using AI entirely and apologized for any confusion.
Details about the vendor’s identity, the review process for posts before publishing, or the total number of AI-assisted replies were not disclosed.
Expert Insight on AI in Customer Support
Maura Grossman, a computer science professor specializing in AI, noted GO Transit isn’t the only organization using AI in customer interactions. She pointed out that chatbots often generate fluent but inaccurate information because they predict likely word sequences rather than understand facts.
Grossman explained that AI chatbots perform well on routine queries like schedules but struggle with unusual or complex requests. Without human oversight, errors like incorrect facts or irrelevant replies are almost certain.
Key Takeaways for Customer Support Professionals
- Human oversight remains critical. AI can speed up responses but needs review to avoid misinformation.
- Transparency matters. Customers expect honesty about when AI is used.
- Training and clear guidelines for vendors are essential. Metrolinx’s experience shows the risk when external partners handle AI-based support without strict rules.
- Be cautious with tone and content. AI can produce unnatural language or repeated info that may frustrate users.
As AI tools become more common in customer support, balancing automation with quality and accuracy is crucial. For those interested in learning how to safely and effectively use AI in support roles, resources like Complete AI Training's courses for customer support professionals offer practical guidance and strategies.