AI has created a rare moment in business: one where possibility is expanding quickly, but certainty is getting harder to hold onto. With excitement and experimentation running high, it’s easy to focus on tools and use cases – and just as easy to overlook the deeper implications for organizations.
At the 2026 Research Symposium: Frontiers of Research on AI, Ivey faculty researchers offered a sharper, more forward-looking lens, sharing new and ongoing work that explores not only what AI can do, but what it fundamentally changes.
Below are eight research highlights from the day, each capturing a key question AI is raising for organizations, and what leaders can do about it.
1. What does it take to make AI think like your organization – not just like a machine?
The challenge: As organizations increasingly rely on AI agents to support (and sometimes shape) business decisions, Joshua Foster, Assistant Professor of Business, Economics and Public Policy, is asking: how do you ensure AI doesn’t simply optimize for efficiency or profit, but reflects what a company truly values?
The takeaway: Never assume AI will make the “best” decision on its own. Foster’s research recommends “explicit alignment”: embedding clear values and stakeholder priorities (shareholders, employees, customers, and society) into AI decision-making upfront, then consistently fine-tuning models to keep them that way.
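Foster’s framework is conceptual, but the core move – scoring candidate decisions against explicit, weighted stakeholder values rather than profit alone – can be illustrated with a minimal sketch. Everything here (the weights, the option names, the impact numbers) is hypothetical, invented for illustration; none of it comes from the research itself.

```python
from dataclasses import dataclass

# Hypothetical stakeholder weights an organization might set upfront.
# The categories echo Foster's framing; the numbers are invented.
VALUE_WEIGHTS = {
    "shareholders": 0.40,
    "employees": 0.25,
    "customers": 0.25,
    "society": 0.10,
}

@dataclass
class Option:
    name: str
    impact: dict[str, float]  # estimated impact per group, in [-1, 1]

def aligned_score(option: Option) -> float:
    """Score a candidate decision against explicit values, not profit alone."""
    return sum(weight * option.impact.get(group, 0.0)
               for group, weight in VALUE_WEIGHTS.items())

options = [
    Option("fully automate support", {"shareholders": 0.8, "employees": -0.5,
                                      "customers": -0.2, "society": -0.1}),
    Option("hybrid human-AI support", {"shareholders": 0.5, "employees": 0.2,
                                       "customers": 0.4, "society": 0.1}),
]

best = max(options, key=aligned_score)
print(best.name)  # prints "hybrid human-AI support", not the profit-max option
```

Making the weights explicit is the point: they become something leadership can debate, audit, and revise, rather than values buried implicitly in a model’s training objective.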
2. Can AI help with the emotional demands of customer interactions?
The challenge: From a Disney cast member’s constant cheer to a collections agent’s repeated firmness, emotional labour can wear people down fast. Yuqian Chang, Assistant Professor of Marketing, asks: what if AI could take on some of that burden?
The takeaway: AI can carry emotional labour, but it shouldn’t carry it alone. Chang’s research shows that AI voice chatbots can outperform top human agents at emotional delivery, particularly in high-stress, negative interactions. But results depend on getting the emotion right for the situation. The winning approach isn’t full automation but smart coordination: triaging customers to AI or humans based on context and real-time responses.
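Chang’s “smart coordination” is a finding, not a published algorithm, but a triage rule could look something like the minimal sketch below. The signals (sentiment, complexity, prior escalations) and the thresholds are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    sentiment: float    # live sentiment estimate, -1 (hostile) to 1 (calm)
    complexity: int     # issue complexity, 1 (routine) to 5 (novel)
    escalations: int    # times this customer has already been re-routed

def route(interaction: Interaction) -> str:
    """Triage a customer to the AI agent or a human, based on context."""
    # A prior escalation means the bot already failed this customer: hand off.
    if interaction.escalations >= 1:
        return "human"
    # Hostile AND complex is where human judgment earns its keep.
    if interaction.sentiment < -0.8 and interaction.complexity >= 3:
        return "human"
    # Novel issues need a human regardless of tone.
    if interaction.complexity >= 4:
        return "human"
    # Routine interactions, even negative ones, go to the AI agent, which
    # Chang's research suggests can sustain a consistent emotional tone.
    return "ai_agent"

print(route(Interaction(sentiment=-0.6, complexity=2, escalations=0)))  # ai_agent
print(route(Interaction(sentiment=-0.9, complexity=3, escalations=0)))  # human
```

In a real deployment, the signals would come from live speech or text analytics, and the thresholds would be tuned against customer outcomes rather than hard-coded.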
3. Could AI summaries discourage online review writing?
The challenge: Online reviews shape everything from travel bookings to tech purchases. And with new AI-generated review summaries, it’s now easier than ever to get the gist – without reading every comment. But ongoing research from Zhe Zhang, Assistant Professor of Marketing, questions how this shift affects reviewers themselves.
The takeaway: Never optimize for speed without protecting participation. AI review summaries can help customers decide faster, but when key takeaways are surfaced upfront, reviewers may feel less seen and less valued. Over time, that can reduce contributions and weaken the review ecosystem itself.
4. Can AI reshape how organizations understand their workforce?
The challenge: Employee attitudes shape organizational culture and decisions. But on hot-button issues, they’re tough to anticipate – and even tougher to change. Kevin Nanakdewa, Assistant Professor of Organizational Behaviour, asks: could machine learning help uncover hidden beliefs in the workplace?
The takeaway: Don’t assume you already understand your employees. By using machine learning to analyze hundreds of thousands of survey responses, Nanakdewa pinpointed the underlying beliefs that most strongly predict employee attitudes. His recommendation? Use AI to surface what truly drives people, then shape messaging and culture initiatives around those beliefs – rather than trying to change opinions head-on.
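The session summary doesn’t detail Nanakdewa’s exact methods, so the sketch below shows one standard way to run this kind of analysis: fit a predictive model on survey responses, then use permutation importance to rank which belief items best predict an attitude. The synthetic data, belief names, and model choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for survey data: rows are employees, columns are
# 1-7 agreement ratings on belief statements. Entirely illustrative.
rng = np.random.default_rng(0)
beliefs = ["merit_is_rewarded", "voice_is_heard", "change_is_threat", "pay_is_fair"]
X = rng.integers(1, 8, size=(5000, len(beliefs))).astype(float)
# The "hidden" structure: the attitude is driven mostly by two beliefs.
y = 0.6 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0.0, 1.0, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each belief column
# degrade the model's predictions? A bigger drop means a stronger driver.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = sorted(zip(beliefs, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")  # top-ranked beliefs are the levers to pull
```

On real survey data, the top-ranked beliefs become the levers for messaging and culture initiatives – the approach Nanakdewa recommends.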
5. Can AI really help level the employee playing field?
The challenge: Generative AI is often framed as a democratizing tool – helping everyone think better and work smarter. But in practice, it’s uneven: with the same access, some employees gain insight and influence, while others see only modest productivity gains. Vaughan Radcliffe, Professor of Managerial Accounting and Control, asks why AI widens the gap instead of leveling the playing field.
The takeaway: Equal access doesn’t mean equal advantage. Radcliffe’s research suggests AI rewards employees who can frame problems, judge outputs, and steer decisions – not just work faster. For leaders, AI isn’t a simple tool rollout; it’s a capability shift. Build critical and interpretive skills across teams, or AI will widen gaps in influence and opportunity.
6. Are we humanizing AI and mechanizing ourselves?
The challenge: As AI gets more capable, we’re starting to talk about it like it’s human, saying it “thinks,” “feels,” or even treating it like a friend. Meanwhile, we describe people like system parts: a “validation step” or a “human node.” New research from Yasser Rahrovani, Associate Professor of Information Systems, explores what happens when language casts AI as the “human expert”: who gets trusted, blamed, and empowered?
The takeaway: Language doesn’t just explain AI, it normalizes harm. When we give AI human attributes and reduce people to cogs in a system, we blur lines around competence, responsibility, and judgment. Leaders should be intentional about how they frame AI, or risk letting distorted language justify and normalize human replacement – a loss for any organization.
7. Can transparency make AI advice as credible as human expertise?
The challenge: In high-stakes settings like investing, people have long trusted human experts over algorithms. But as AI-generated analysis becomes more common, investors may increasingly question: was this written by a seasoned analyst, or by AI? Guneet Kaur Nagpal, Assistant Professor of Marketing, explores whether credibility hinges on the source – or just the words.
The takeaway: Transparency builds trust. In Kaur Nagpal’s study with hundreds of investors, AI-written reports weren’t automatically seen as less credible than human ones. In fact, credibility rose most when the AI’s data and logic were explained. In other words, if AI informs customer decisions, clear disclosure and explainability matter more than a human stamp of approval.
8. Is AI strengthening your organization, or destabilizing it?
The challenge: Organizations often assume AI will make work more efficient by strengthening current systems. But in practice, AI can reveal new patterns that challenge what leaders think they know – and spark conflict over what counts as the “real problem.” Mark Zbaracki, Associate Professor of Strategy, explores what happens when AI meets organizational intelligence.
The takeaway: AI isn’t just a tool, it’s a stress test for how your organization thinks. Zbaracki finds that leaders often want AI to reinforce existing rules, while data scientists use it to surface new patterns and challenge assumptions. His recommendation? To unlock the full potential of AI, prepare your organization to engage with unexpected discoveries.