Stop reading articles about AI. Start doing something about it.
That may seem like a strange opening for an article about AI. But the single most damaging thing happening in boardrooms and corner offices across America right now is not ignorance about artificial intelligence. It is the peculiar paralysis of people who understand perfectly well that this technology is transforming their industry and have spent eighteen months attending webinars, commissioning reports, and forming working groups - without actually changing anything.
The window for a considered, unhurried approach to AI transformation has closed. It closed some time ago. McKinsey's 2025 State of AI report found that 92% of companies plan to increase their AI investment - but only 1% have achieved full operational integration. Your competitors are almost certainly in the gap between those two figures: the roughly 91% who are spending money and generating noise without generating results. Which means this is your moment to be different. Not eventually. Now.
Here is what you need to do.
Do not convene a task force. Do not hire a consultant. Do not wait for a strategy to emerge from below. The evidence on this is unambiguous: PwC's research into AI front-runners found that the companies seeing real returns share one common trait - senior leadership picks the spots. Crowdsourcing AI initiatives from across the organization, as most companies have done, generates impressive adoption numbers and almost never produces transformation.
This week, you need to identify two - not twenty, two - areas of your business where AI can deliver a clear, measurable improvement within ninety days. The criteria are simple. You are looking for processes that are high-frequency and repetitive, where the bottleneck is human processing time rather than human judgement, and where your data is already reasonably clean and accessible. Invoice processing. First-pass contract review. Scheduling. Generating first drafts of routine client communications. Customer query triage. Demand forecasting. Sales report generation.
Ask yourself three questions about each candidate. Where are our decisions slow, expensive, or inconsistent - and could AI make them faster, cheaper, or better? Where are skilled people spending hours on tasks that don't require their expertise - and could AI take those hours back for the work that does? Where does poor or slow information cause us to make worse decisions - and could AI surface what we're missing?
The cases that answer yes to all three are your best bets. Pick the two with the cleanest data and the most enthusiastic line manager, and move directly to a pilot. Define what success looks like before you start - not "improve efficiency" but "reduce the time from customer enquiry to quote from four hours to forty minutes." Set a ninety-day clock. Report the outcome, including what didn't work.
That pilot is your lever. Not for the efficiency gain, though that matters. For the cultural change that comes from people seeing AI work in their own context, on their own problems. Nothing does more to shift scepticism than a real result that belongs to your team.
Here is the number that should be keeping you up at night, and probably isn't: 89% of your employees are worried about what AI means for their job security. Not mildly interested. Not cautiously curious. Worried. According to EY's research, 65% are specifically anxious about being replaced. And 45% of CEOs - your peers - have already reported that their employees are reluctant or hostile toward AI adoption.
That hostility is not a communications problem you can message your way out of. It is a rational response to real events. Klarna replaced hundreds of customer service staff with AI and publicly celebrated it. Amazon's CEO sent a company-wide memo acknowledging that AI would shrink the workforce. Your employees read the same news you do. When you ask them to embrace a technology while refusing to discuss what it means for their futures, you are not managing their anxiety - you are compounding it.
The research on what actually reduces AI anxiety is specific, and it may surprise you. EY found that employees would be more comfortable with AI if senior leadership promoted it "responsibly and ethically" - not if leadership promised jobs were safe. Responsibility, not reassurance. Honesty about what you know and don't know builds more trust than a guarantee you may not be able to keep.
This month, have the conversation you have been avoiding. Gather your teams. Acknowledge that AI is changing work - including theirs. Explain what you are piloting and why. Be honest about uncertainty. And then ask them something that most employers have not thought to ask: where do you think AI could help you do your job better? The same EY survey found that 77% of employees would be more comfortable with AI if workers from all levels were involved in the adoption process. People support what they help build. That is not a soft people-management principle. It is competitive strategy.
The AI tools available today will be different in twelve months. The fundamental skill of working effectively alongside AI - knowing when to trust it, when to push back on it, how to prompt it well, how to apply human judgement to its outputs - will not change. That is what you need to build, and you need to start building it now.
The World Economic Forum's 2025 Future of Jobs Report projects that 92 million jobs will be displaced by AI by 2030 - and that 170 million new ones will be created. The net figure is positive. The transition is not painless, and it falls disproportionately on people who don't get meaningful support from their employers. The organizations investing in workforce development are, per Deloitte's 2025 Human Capital Trends, 1.8 times more likely to report better financial results. The business case for upskilling is not altruism. It is arithmetic.
What this looks like in practice: role-specific AI training, not generic AI literacy sessions. Your accounting team needs to understand which analytical tasks AI now handles and what excellent accounting looks like in that context. Your customer service team needs to learn how to work alongside AI tools without feeling surveilled by them. Your salespeople need to know how to use AI-generated insights to have better conversations, not be replaced in them.
This quarter, you also need to do something that almost no employer has done: update your performance review framework. If you are asking people to adopt AI and work differently but still measuring the same inputs - hours worked, volume of output, processes followed - you are sending contradictory signals. An employee who uses AI to accomplish in ninety minutes what used to take a full day is either your star performer or your next resignation, depending on how your system reads that outcome. Lux Research puts it directly: leaders must "proactively adapt promotion practices to recognise and reward employees who demonstrate thoughtful, effective use of AI." Do this before you push further AI adoption, or you will lose the people best placed to lead it.
There is one intervention with an outsized effect on AI adoption inside organizations that costs nothing and requires no technology budget. It is leadership role-modelling, and most senior leaders are failing at it.
McKinsey's research on AI high performers found they are three times more likely than their peers to have senior leaders who actively demonstrate ownership of and commitment to AI - not in board presentations, but in how they work. When a leader uses an AI tool in their own practice, talks about what they tried and what surprised them, shares a result that saved them two hours on a task they used to dread - they grant the entire organization permission to do the same. When that same leader talks about AI at every all-hands meeting but visibly never touches it themselves, everyone notices.
You do not need to become a technical expert. You need to become a visible learner. Open an AI tool today. Use it for something real - a memo you need to draft, a piece of market research, a presentation you've been putting off. Notice what it does well and where you had to push back. Tell someone on your team about it tomorrow. That is it. That is the first step, and it is available to you right now.
Harvard Business School's faculty, surveyed earlier this year on what leaders most need to do in 2026, described the imperative as building "change fitness" - making adaptability itself a core organizational capability, not an afterthought. The leaders who model that adaptability publicly are the ones who create cultures where their people can do the same.
Moving fast on AI without building governance alongside it is how organizations create liability, not advantage. This is the least glamorous part of the conversation and the most important to get right early.
You need, within the next thirty days, to establish clear policies on four things: which AI tools employees may use and for what purposes; how AI-generated outputs are reviewed before reaching customers or decision-makers; how employee data privacy is protected when third-party AI tools are involved; and what happens when an AI output is wrong, biased, or legally problematic - because it will be, eventually.
PwC found that 60% of executives say responsible AI boosts ROI and efficiency, but nearly half say translating responsible AI principles into actual operational processes has been their hardest challenge. It is not hard because the principles are complicated. It is hard because governance requires deliberate work that does not generate press releases. Build the review processes, the audit trails, and the escalation paths before you need them. Once you need them, it is too late.
The case for urgency is not that AI will make your business obsolete overnight. It probably won't. The case for urgency is compounding.
The organizations that began their AI transformation eighteen months ago are now on their third or fourth iteration. Their pilots have become products. Their anxious employees have become advocates. Their workflows have been redesigned, not patched. The gap between them and organizations that are still deliberating is not growing linearly. It is accelerating.
IDC research shows that for every dollar invested in generative AI, organizations are realizing an average return of $3.70 - with the top performers achieving $10.30. Google's research found that, among companies seeing productivity gains, 45% report generative AI has doubled employee output. These are not projections. They are reported outcomes from organizations that started earlier than you and kept going.
The most dangerous place to be right now is in the vast middle - knowing enough about AI to feel informed, but not yet doing enough to be competitive. That is where most organizations are. It is comfortable, and it is costly.
Your competitors are not waiting for the perfect strategy, the complete data infrastructure, or the fully developed governance framework. They are running pilots, learning, iterating, and building the institutional knowledge that will compound into advantage over the next five years. The question is not whether to join them. It is how quickly you can start.
The technology will not wait for a more convenient moment. Neither will the market.