Artificial Intelligence’s Next Steps: Promise, Pitfalls and the Unfinished Human Role
AI assists with human tasks without replacing human judgment, and its development demands ethical design and transparency about bias. Future AI may foster creativity through open-ended, diverse solutions rather than fixed answers.

The Future of Artificial Intelligence: Insights from SSIE Faculty
Artificial intelligence (AI) has become a part of daily life, from streaming recommendations to automated manufacturing. The rise of generative AI—systems that create new content based on prompts—has sparked business investments and public interest. Researchers at the School of Systems Science and Industrial Engineering (SSIE) offer perspectives on AI’s current state, challenges, and future directions.
Historical Perspective on AI Progress
Carlos Gershenson-Garcia, a SUNY Empire Innovation Professor, emphasizes that AI breakthroughs often appear closer than they are. Past disappointments, such as early machine translation struggles and failed expert systems, led to funding declines and “AI winters.” Unlike previous eras dominated by industrial companies, today’s richest firms focus on information processing.
He cautions that predictions about AI replacing entire job roles are premature. Instead, AI will likely assist humans, enabling fewer people to perform tasks more efficiently, but rarely eliminating the human element entirely.
Human-Centered Design and AI Limitations
Assistant Professor Stephanie Tulk Jesso focuses on human/AI interaction, advocating for designs that respond to user needs rather than imposing unsuitable tools. She notes that, in many cases, AI introduces more noise and complexity without delivering clear benefits.
Tulk Jesso raises concerns about overhyped AI expectations, ethical issues like copyright infringement, environmental impacts, and exploitative labor conditions behind AI training. She points out AI’s unreliability in critical situations, citing examples where AI suggested dangerous or nonsensical advice.
She argues that AI should be treated like a material with known properties—tested and understood—before integrating it into important applications.
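Her "material with known properties" framing suggests measuring a system against an explicit bar before it is allowed into an important application. A minimal sketch of such an acceptance gate follows; the model, the test cases, and the accuracy threshold are hypothetical stand-ins, not a method described in the article:

```python
def passes_acceptance(model, cases, min_accuracy=0.95):
    """Gate: refuse to deploy a model until it meets a measured bar,
    much as a material's properties are verified before use."""
    correct = sum(1 for x, expected in cases if model(x) == expected)
    return correct / len(cases) >= min_accuracy

# Hypothetical stand-in "model": flags dosages above a safe limit.
model = lambda dose: "alert" if dose > 50 else "ok"
cases = [(10, "ok"), (60, "alert"), (49, "ok"), (51, "alert")]
print(passes_acceptance(model, cases))  # 4/4 correct -> True
```

The point is not the toy model but the discipline: the system's behavior is characterized on known inputs before anyone relies on it.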
Collaborative Robotics in Industry
Associate Professor Christopher Greene studies cobots—robots designed to work alongside humans—in industrial settings advancing toward Industry 4.0. Unlike traditional robots limited to repetitive tasks, cobots can safely collaborate with human operators, enhancing accuracy and consistency.
Greene highlights projects involving automated electronic assembly and pharmacy operations, where precision is critical. Properly programmed cobots reduce human error in tasks such as pill sorting and packaging, directly impacting safety and efficiency.
Addressing AI Bias and Transparency
Associate Professor Daehan Won applies AI in manufacturing and healthcare to improve decision-making. He identifies challenges such as the “black box” nature of AI, where decision processes are opaque, limiting trust in fields like medicine.
Won stresses the importance of unbiased data inputs, noting that many AI models reflect disparities due to uneven data representation across regions and demographics. His collaborative work aims to improve inclusivity in medical research and adapt manufacturing AI for diverse operational contexts.
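One simple way to illustrate the kind of representation problem Won describes is to report each group's share of a dataset and flag groups that fall well below a uniform share. The records, group labels, and flagging rule below are illustrative assumptions, not his methodology:

```python
from collections import Counter

def representation_report(records, group_key):
    """Report each group's share of a dataset, flagging a group as
    under-represented when its share is below half of a uniform split."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    uniform_share = 1.0 / len(counts)
    return {
        group: {"share": round(n / total, 3),
                "under_represented": n / total < 0.5 * uniform_share}
        for group, n in counts.items()
    }

# Toy records; the regions and proportions are invented for illustration.
records = ([{"region": "urban"}] * 80
           + [{"region": "rural"}] * 15
           + [{"region": "remote"}] * 5)
print(representation_report(records, "region"))
```

A check like this is only a first pass; real audits would also examine label quality and outcome disparities, but even a coarse share report makes skew visible before a model is trained on it.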
AI as an Aid, Not a Decision Maker
Professor Sangwon Yoon views AI as a powerful tool but insists humans must retain final authority, especially in critical areas like healthcare and military decisions. Public opinion reflects caution and skepticism toward AI’s expanding role.
Yoon’s research spans manufacturing and healthcare, where AI accelerates problem-solving but cannot replace human judgment. Algorithms assist in diagnosis and process control, but medical professionals still interpret and act on AI input.
Beyond Single Solutions: Embracing Open-Ended AI
Distinguished Professor Hiroki Sayama studies artificial life, aiming to mimic living systems’ adaptive and exploratory behaviors. Unlike AI systems designed to find one “right answer” quickly, biological systems explore multiple possibilities continuously.
Sayama highlights the concept of “open-endedness,” where AI generates diverse, novel solutions without fixed goals. He warns that reliance on similar AI tools risks homogenizing outputs, reducing creativity and diversity.
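Open-endedness is related to techniques such as novelty search, which keeps solutions because they behave unlike anything seen before rather than because they approach a fixed target. A minimal sketch follows, in which a one-dimensional "behavior" value, the distance measure, and the novelty threshold are all illustrative assumptions:

```python
import random

def novelty(candidate, archive, k=3):
    """Novelty score: mean distance to the k nearest archived behaviors."""
    if not archive:
        return float("inf")
    dists = sorted(abs(candidate - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(steps=200, threshold=0.5, seed=0):
    """Archive behaviors that are novel relative to what has been seen,
    instead of optimizing toward a single target value."""
    rng = random.Random(seed)
    archive = []
    for _ in range(steps):
        candidate = rng.uniform(0, 10)  # stand-in for a measured behavior
        if novelty(candidate, archive) > threshold:
            archive.append(candidate)
    return archive

archive = novelty_search()
print(f"{len(archive)} diverse behaviors kept out of 200 candidates")
```

Because the archive rewards difference, it spreads out over the behavior space rather than converging on one answer, which is the contrast Sayama draws with conventional goal-driven AI.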
He envisions future AI that can facilitate human discussions, translate between sensory formats, and simplify complex ideas—beyond generating text or images.
Practical Takeaways for Science and Research Professionals
- AI remains a tool that aids human expertise rather than replacing it.
- Design AI systems with clear understanding of user needs and context.
- Address data bias and demand transparency to build trust in AI applications.
- Explore AI’s potential beyond fixed solutions by encouraging diverse, open-ended approaches.
- Consider ethical, environmental, and labor implications in AI development and deployment.