Sure, you’re using AI. Most students do. What may surprise you is that, despite this widespread use, nearly half of all students do not feel ready for an AI-enabled job or career. They are in school right now wondering whether they are learning the skills companies now expect.

It is good to have doubts, and you are not alone. Recent surveys show that 57% of teens believe AI has negatively impacted their career outlook, driven by serious concerns about job replacement and falling behind (Junior Achievement & Citizens, 2025; Digital Education Council, 2024).

Work is changing quickly, and employers are raising expectations for how graduates think, decide, and lead alongside AI. As Salesforce CEO Marc Benioff has put it, the next generation of leaders will manage not just people but mixed teams of humans and AI (Salesforce World Tour keynote, 2023). Graduates are no longer hired simply to execute tasks. They are increasingly expected to exercise judgment, guide intelligent systems, and take responsibility for outcomes in environments where AI is always present (WEF, Future of Jobs Report, 2025).

Entering that AI-enabled workplace can feel like being asked to fly a commercial jet after only playing a flight simulator. You might be familiar with the controls of the technology, but without deeper training and an understanding of the broader system, it is natural to worry about making costly mistakes. Familiarity with AI does not automatically translate into AI readiness (DEC Global AI Student Survey, 2024).

“Entering that AI-enabled workplace can feel like being asked to fly a commercial jet after only playing a flight simulator.”


Imagine Alex, a student who uses AI every day. Alex moves faster, generates more ideas, and feels productive. But when asked to explain a recommendation, critique an idea, or make a judgment call without AI, Alex’s confidence drops. That moment of hesitation is becoming increasingly common, and it reveals a deeper issue: speed without understanding does not build professional confidence.

If you have experienced moments like that, you know the feeling. Being AI-ready is less about knowing the latest tools and more about knowing yourself. It starts with AI literacy: understanding when and how AI influences decisions, and where its limits require human judgment. Just as importantly, it involves developing the capabilities AI cannot easily replace. At Hult, we focus on what we call the 5Cs: creativity, curiosity, critical thinking, collaboration, and communication. These are the durable, adaptive, human-centered skills that define professional competence in an AI-enabled world.

As AI takes on more tasks, jobs are shifting from doing the work to guiding it. Compared to previous generations, today’s graduates are expected to supervise intelligent systems rather than compete with them. This involves redefining goals, strengthening governance and oversight, and ensuring human accountability as AI becomes embedded in organizational decision-making (McKinsey, The State of AI, 2023). The real divide is no longer between people who use AI and those who do not, but between those who rely on AI blindly and those who work with it intentionally.

This is also where Design Thinking becomes a critical bridge between AI use and AI readiness. Design Thinking provides a structured way to develop the 5C’s in practice by emphasizing empathy, problem framing, experimentation, and reflection. It strengthens collaboration across perspectives and critical thinking in decision-making, ensuring that AI use remains grounded in your judgment. When applied to Human-Centered AI, whether students are designing and developing AI systems, configuring agentic workflows, or using AI tools in everyday work, Design Thinking ensures that technology remains aligned with human judgment, ethical considerations, and real-world context (Böckle et al., 2025).

Creativity, in this setting, is no longer about producing ideas faster. AI already does that well. Creativity now lies in framing the right problems, deciding what is worth solving, and shaping direction when options are abundant (Verganti et al., 2020). Curiosity shifts from information gathering to inquiry, asking “what if” and “why not” as AI accelerates exploration. Critical thinking becomes an essential quality-control layer: evaluating outputs, challenging assumptions, and ensuring sound decisions. Collaboration becomes vital as students work across perspectives, align stakeholders, and build shared understanding. Communication ties everything together, ensuring ideas are understood, challenged, and improved (Shneiderman, Human-Centered AI, 2020), and translating intent into action across humans and intelligent systems (Amershi et al., Human-AI Interaction Guidelines, 2019). This work builds on human judgment, empathy, and the ability to act under uncertainty, shaping how better decisions are made.

When learning environments are intentionally designed around these principles, students do not simply consume AI outputs. They learn how to supervise AI. They practice setting success criteria, defining constraints, questioning results, and refining decisions (DEC, 2024). For example, when students use AI-powered personas in market research, the learning does not come from the persona’s answers alone. It comes from deciding which questions to ask and how to interpret responses in context. When AI accelerates data analysis, insight emerges not from the query itself, but from curiosity-driven exploration and critical evaluation of patterns (Böckle et al., 2025).

The same applies to programming, product development, and marketing strategy. AI can write code, generate prototypes, and produce campaign copy in seconds. But students must still verify logic, assess feasibility, and take responsibility for outcomes. Design Thinking reinforces this by rewarding explanation, iteration, and accountability rather than speed alone. It prevents students from becoming what some educators call “perpetual novices”: highly productive on the surface, but dependent on AI for reasoning and judgment.

Over time, this creates a virtuous learning loop. Students teach AI how to operate within a specific context. AI surfaces insights that expand understanding. Students reflect, explain, and refine their thinking. Confidence grows not from dependency on tools, but from mastery of judgment (Shneiderman, Human-Centered AI, 2020).

The difference between AI-ready professionals and perpetual novices becomes clear when looking at future roles. In product management, one ships what AI suggests, while the other uses AI to explore multiple directions and then applies human insight to choose wisely. In data analysis, one reports whatever numbers appear, while the other frames the right questions and connects insights to strategy. In marketing, one launches AI-generated content, while the other supervises autonomous agents with clear goals, ethical awareness, and brand intent (McKinsey, 2023; World Economic Forum, 2023).

By 2030, AI in business will be ubiquitous. Everyone will have access to powerful tools, so tool access will no longer be the differentiator. The differentiator will be whether students used this moment to develop the 5Cs through intentional, applied learning (IDC AIED Market Forecast, 36% CAGR).

Confidence in an AI-enabled career does not come from knowing every feature of every tool. It comes from knowing when to rely on AI, when to challenge it, and when human judgment must lead. That confidence is built in the classroom, through learning experiences that combine AI literacy with the 5Cs, developed through the application of Design Thinking in real-world business contexts.

“Confidence in an AI-enabled career does not come from knowing every feature of every tool. It comes from knowing when to rely on AI, when to challenge it, and when human judgment must lead.”


We see this every semester in our Business Challenge courses, which place students in live consulting engagements with real companies, real problems, and real consequences.

At our Boston campus, a recent challenge focused on applying AI to life sciences, including pharma, biotech, and diagnostic medical devices. Students were asked: how would AI-driven innovation improve life-saving therapies, health, and wellness? They had one week before facing a live executive audience. Design Thinking would be their guide.

Faced with that problem, students dive in. The room is loud in the way most workplaces are loud. Teams huddle around whiteboards sketching a patient journey. Others scroll through industry research, surfacing challenges that were unknown an hour earlier. They start empathizing with patients and caregivers. They have to listen, they have to debate, they have to reframe.

They are demonstrating the 5Cs well before the pitch is written. That’s creativity in action. Design Thinking teaches them to start with the human experience rather than the technology, so they begin with empathy. Why do patients stop taking medication? Why do caregivers burn out trying to help? Why do systems designed to heal sometimes fail the people they serve?

One team looked at Alzheimer’s care and saw something many miss: a coordination-of-care failure across caregivers, clinicians, and families. Each had incomplete information and made independent decisions that were not always best for the patient. The team mapped the stakeholders, found the pain points, and asked what each person needed.

Only then did AI enter the room.

We see students build working prototypes with AI tools. In this case, critical thinking pointed toward a multi-stakeholder web platform with role-specific dashboards, real-time alerts, and longitudinal digital biomarkers. But they kept questioning as they worked through ambiguity and made decisions.

Why are treatment plans so difficult to follow? What does AI accuracy mean to an 80-year-old with declining cognition and a daughter managing care from three states away? How do you frame information so that well-being stays front and center? Curiosity and critical thinking drove those questions. The answers shaped every design decision.

As the challenge progresses, what began as exploration becomes strategy. We see them wrestle with questions that have no single right answer, only defensible ones. Students move from ideation to implementation. They use AI to simulate patient personas, set ethical boundaries, and stress-test financial models. AI accelerates exploration, amplifying their best ideas.

This is where collaboration shows up.

By the time pitch day arrives, anticipation is infectious. The students stand in front of executives from Pfizer, Bristol Myers Squibb, and Elsevier. They present ideas and a line of reasoning that demonstrates their AI literacy. Their solution isn’t just an AI app. It is a multi-stakeholder platform with role-specific dashboards, real-time alerts, and longitudinal digital biomarkers.

Then the questions come. This is the moment that matters, the kind students will face in every job interview, every strategy meeting, every decision where AI is present but accountability remains their own. Executives are testing student judgment in real time. Can you defend your strategy? Can you say where AI’s limits lie? Can you explain why you said no to certain features even though the technology could deliver them?

The students don’t flinch. They answer with specifics. They acknowledge uncertainty. They explain trade-offs. What we see in that moment isn’t confidence from knowing everything. It’s confidence from knowing how to think through what they don’t yet know. More than pitching an idea, they are sharing a line of reasoning. They explain why AI belongs here and why it does not belong there. They have weighed the AI risks and describe how humans stay in the decision-making loop.

That is Hult’s 5Cs in action: creative, curiosity-driven exploration, thoughtful critical thinking, and timely communication and collaboration, strengthened by applied experience. This is what learning with AI looks like when it is designed for future careers. The goal of the Business Challenge is more than producing solutions: it is developing judgment. To help students move from using AI to guiding it. To replace career anxiety with professional readiness.

The democratization of AI means everyone has access to powerful AI tools. Yet not everyone will know how to lead alongside them. That difference is learned here.

“The democratization of AI means everyone has access to powerful AI tools. Yet not everyone will know how to lead alongside them. That difference is learned here.”


AI-ready professionals develop 5C-driven decision-making. If that description gives you pause, look for learning experiences designed around intentional AI integration, Design Thinking, and pragmatic application. That is where readiness is built.

Martin Böckle, PhD is Professor of Information Systems at Hult International Business School. Patrick Lynch, PhD is AI Faculty Lead and Professor at Hult.