Artificial intelligence is no longer just a background tool; it is becoming a prominent partner in our work and education. In an era where generative AI tools like ChatGPT, Claude, and Copilot are reshaping how we work, think, and learn, researchers at Hult International Business School are rethinking what it means to conduct relevant, applied research, particularly within executive doctoral education, where students straddle the worlds of academic scholarship and executive practice. This paradigm shift challenges traditional boundaries between human and machine, academia and industry, theory and practice. What if AI isn’t just a tool in human-machine collaboration? What if AI evolves from an assistant to a co-scholar? This is the central idea behind human-AI co-scholarship.
From tools to epistemic contributors
New technologies in education have always met initial resistance, from calculators to word processors and even digital libraries. Generative AI tools represent the newest wave of disruption, challenging how we articulate the world around us. Unlike previous technologies, however, these tools don’t just support tasks; they actively influence how we think through problems by suggesting framings, surfacing patterns, and modeling reasoning.
AI: A bridging tool between theory and practice
Within executive doctorates such as the Doctor of Business Administration (DBA), this influence matters. Students in these programs represent a unique subset of the education population: senior business managers, C-suite executives, board advisors, and government officials experienced in tackling real-world problems of practice. These students often view executive doctorates not only as a new level of achievement in their quest for professional reputation and personal fulfillment, but also as a way to learn new, systematic methods of solving problems for impact. Their professional experiences and scholarly journeys are therefore particularly crucial in bridging the infamous gap between academia and industry, long lamented by scholars and executives alike. They need research methods that can bridge theory and practice. And this is where AI can help.
Yet many institutions still discourage or restrict AI use under traditional notions of rigor and originality, sustaining tensions between academic conventions and professional reality and further exacerbating academia’s crisis of relevance. Rather than asking whether AI should be used in research, we should be asking how researchers can engage AI in rigorous, reflexive, and ethically sound ways to solve real-world problems. Cue human-AI co-scholarship.
Creating a Human-AI co-scholarship framework
The human-AI co-scholarship framework is built on two key theories: reflexivity and socio-materiality. The former encourages researchers to critically examine how AI influences their assumptions, decisions, interactions, and interpretations. The latter positions AI as a nonhuman actor embedded within the research process, shaping inquiry. On this foundation, three interconnected domains were developed:
- Dialogic Engagement: Iterative exchanges between the researcher and AI as a cognitive mirror and provocateur, shaping inquiry
- Epistemic Framing: Recognizing how beliefs and institutional norms regarding legitimacy, originality, and integrity shape whether, how, and to what extent AI is used (or restricted) in research
- Knowledge Co-production: Exploring how AI influences not just the research process but also its outcomes, which can in turn create insights across theory and practice
At their intersection lies human-directed reflexivity, which emphasizes that while AI contributes epistemically to knowledge creation, the human researcher remains the scholarly authority and director, maintaining continuous self-awareness of their scholarly relationship with AI and how it shapes their inquiry. The framework yields seven propositions that explore how AI influences the research process, from choice of topic to ethics. AI isn’t a co-author or scholar, but it can be a co-actor in sense-making. Thus, with human agency and authority, human-AI interaction can constitute scholarship.
AI isn’t a co-author or scholar, but it can be a co-actor in sense-making.
Dr. Kate Abraham, Assistant Dean of the DBA Program at Hult International Business School
Why this matters
By recognizing AI as an epistemic actor, we can move beyond binary arguments about AI usage and simplistic debates about plagiarism and productivity. The use of AI in education and practice is nuanced and therefore calls for richer inquiry. The question isn’t whether AI should be used. Rather: how does AI change what we know about rigorous research? How do scholars maintain authority while engaging with nonhuman actors? In the context of executive doctoral education, what does human-AI knowledge co-creation entail, and how might it transform applied research practices? And, arguably, the most important question of all: how might ethical, improved human-AI interaction itself lead to better AI systems in the future?
This framing has profound implications for academia and doctoral education, specifically:
- Supervision: Beyond policing its use, mentors play a key role in helping students navigate AI usage and critically evaluate how AI contributes to, constrains, or provokes their research decisions and actions
- Curriculum: Courses must embed AI literacy, epistemic awareness, and reflexivity as core research skills that cultivate awareness of AI’s influence on logic, interpretation, and scholarly voice
- Institutional policy: Institutions need nuanced guidelines that move beyond a binary “AI allowed vs. not allowed” stance and reflect the realities of human-AI collaboration, including reflexive disclosure of AI’s influence, clarity on authorship vs. epistemic contribution, and open dialogue between students and institutional leaders
AI adoption is crucial for remaining relevant
These implications are particularly salient for executive doctoral students, who are navigating the dual and sometimes conflicting identities of researcher and practitioner in an era when AI is reshaping both scholarly and industry labor. Executive doctoral programs are uniquely positioned to lead this shift at the intersection of theory, practice, and technology. Human-AI co-scholarship challenges us to see knowledge creation as an entangled process in which reflexive humans and generative technologies work together. This could help executive programs not only stay relevant but also pioneer new forms of research for real-world impact.
Looking ahead: From Malibu to Methodology
Dr. Kate Abraham will be presenting the full paper at the Engaged Management Scholarship (EMS) Conference hosted by the Executive DBA Council (EDBAC) this September. It promises to be a foundational step in shaping how executive doctoral programs and higher education at large can respond to the AI era.
Interested to learn more?
You can contact the authors at:
- Kate Abraham: kate.abraham@hult.edu
- Steph Sharma: stephanie.a.sharma@gmail.com
- Patrick Kincaid: pkincaid@gmail.com