“I’m not interested in AI because …”

Well, I can honestly say I have heard that response more times than I care to count. So I had to stop and ask myself: what is all this resistance about? I find that the people saying it are often unclear about the reasons for their objections. So it’s time to address some of those concerns.

First, what are we talking about when we talk about “AI”? If we are going to object to something, we should know what it is.

AI, or Artificial Intelligence, is a broad spectrum of technologies that have evolved alongside the dawn of computers. One could argue that the pursuit of artificial intelligence was a powerful catalyst in the development of modern computing. In 1936, mathematician Alan Turing introduced the concept of the “Universal Machine”—now known as the universal Turing machine. This theoretical construct could read and write symbols on an infinite tape, moving left or right based on a set of rules and its current state. It laid the foundation for programmable computers and the idea that machines could simulate any logical process.
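To make that idea concrete, here is a minimal sketch of a Turing machine in Python. This is my own illustration, not anything from Turing’s paper: the rule table, function name, and the bit-inverting example are all invented for this demonstration.

```python
# Minimal Turing machine sketch (illustrative only).
# rules maps (state, symbol) -> (symbol_to_write, move, next_state).
def run_turing_machine(tape, rules, state="start", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back in order, dropping blank cells at the ends.
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example rule table: invert every bit, then halt at the first blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", invert))  # prints "0100"
```

A handful of rules and a movable head is the entire mechanism, yet Turing showed this is enough, in principle, to carry out any computation a modern computer can.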

Fast forward to today, and Artificial Intelligence can be broadly grouped into four overlapping categories—though these lines are increasingly blurred as technologies evolve and converge:

  • Predictive AI: These models analyze historical data to forecast future outcomes. You’ll find them in everything from weather apps to financial risk assessments.
  • Robotic Process Automation (RPA): RPA automates repetitive, rule-based tasks—like data entry or invoice processing—boosting efficiency in back-office operations.
  • Agentic AI: These systems act autonomously to achieve goals, often making decisions or taking actions without constant human input. Think of virtual assistants or AI-driven customer service bots.
  • Generative AI: This includes models like Large Language Models (LLMs), which create new content—text, images, code—based on learned patterns. They’re the engines behind tools like ChatGPT and Microsoft Copilot.

Of course, these categories aren’t rigid. Many AI systems blend elements from multiple types to solve complex problems. And with the pace of innovation accelerating, any attempt to neatly divide the field is temporary at best. Still, this framework offers a useful starting point for understanding what AI is—and what it’s becoming.
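To make the first category above a little more concrete, here is a toy sketch of predictive AI: fit a straight line to past observations, then extrapolate. This is my own illustration, not a description of any real product; the `sales` figures and function names are hypothetical.

```python
# Toy predictive model: least-squares line fit over a time series.
def fit_line(ys):
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast(ys, steps_ahead):
    # Predict the value `steps_ahead` periods past the last observation.
    slope, intercept = fit_line(ys)
    return slope * (len(ys) - 1 + steps_ahead) + intercept

sales = [100, 104, 108, 112, 116]  # hypothetical monthly history
print(forecast(sales, 1))          # prints 120.0
```

Real predictive systems use far richer models, but the shape of the idea is the same: learn a pattern from historical data, then project it forward.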

While today’s Generative AI models, including Large Language Models (LLMs), operate on vastly different architectures—using neural networks and transformers rather than tapes and state tables—they share a philosophical lineage with Turing’s vision: machines that process input, apply learned rules, and generate meaningful output. In that sense, the dream of intelligent machines helped shape the very tools we now use to build them.

Artificial Neural Networks (ANNs), the foundational building blocks of Large Language Models (LLMs), have roots reaching back to the 1940s, but they were propelled into the mainstream in the mid-1980s by research from Geoffrey Hinton and his colleagues, much of it associated with the University of Toronto. Hinton was later recognized for this work, sharing the 2024 Nobel Prize in Physics for foundational discoveries in machine learning.
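For a sense of how simple the underlying unit of an ANN is, here is a sketch of a single artificial neuron: a weighted sum passed through an activation function. This is an illustrative toy with hand-picked weights; in a real network, and in Hinton’s work, the weights are learned from data rather than chosen by hand.

```python
import math

# One artificial neuron: weighted sum of inputs, plus a bias,
# squashed through a sigmoid activation into the range (0, 1).
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Hand-picked weights so the neuron approximates logical AND:
# the output is near 1 only when both inputs are 1.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(neuron([a, b], [10, 10], -15), 3))
```

An LLM is, at heart, billions of these units wired together, with training adjusting all of those weights at once.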

Artificial Intelligence isn’t new. It’s been here for some time, and it’s not going away. In fact, we’re just getting started.

With that history now in mind, here are a few of the most common objections to AI.

I’m not interested in AI because … it’s new so I’m just going to wait and see how it develops.

That’s a common stance, and it makes sense on the surface. New technologies often go through hype cycles, and it’s tempting to sit back and wait for things to “settle.” The thinking goes: “The rush will pass, and I’ll catch up when the dust clears.”

But here’s the reality: this isn’t a passing trend. AI isn’t just evolving—it’s accelerating. And unlike some tech waves that plateau, AI is being woven into the fabric of how businesses operate, how people learn, and how decisions are made. The pace of innovation is increasing, and so is adoption—across industries, geographies, and roles.

Waiting Isn’t Neutral—It’s Regressive

Choosing to wait doesn’t just mean missing out on the latest tools. It means falling behind:

  • Competitors are already integrating AI into workflows, gaining efficiency and insight.
  • Employees are upskilling with AI, becoming more productive and adaptable.
  • Markets are shifting toward AI-driven expectations—faster service, smarter personalization, better outcomes.

By the time things “settle,” the landscape may have changed so dramatically that catching up becomes far more difficult.

A Smarter Approach: Low-Risk Exploration

You don’t have to dive in headfirst. But dipping a toe in now—experimenting with tools like Microsoft Copilot or ChatGPT—can help you:

  • Understand the capabilities and limitations firsthand
  • Identify areas where AI could support your goals
  • Build confidence and literacy before the curve steepens

Think of it like learning to swim before the tide comes in. You don’t need to master it overnight—but you do need to start.

I’m not interested in AI because … it is brought in to eliminate jobs.

That concern is absolutely valid—and it’s one of the most common fears surrounding AI. When we hear about automation replacing repetitive or “mundane” tasks, it can feel like a threat to livelihoods. But here’s the truth: it’s not AI itself that eliminates jobs—it’s how people and organizations choose to use it.
It’s just like the rise of computers: they didn’t eliminate office work—they transformed it. AI is doing the same, only faster.

The Real Shift: Efficiency vs. Replacement
AI isn’t coming for your job. But someone who knows how to use AI might. The competitive edge now lies in augmentation, not elimination. Those who learn to work alongside AI—using it to streamline tasks, generate insights, and boost creativity—are the ones who will thrive.

  • Workers who upskill with AI tools become more valuable, not less
  • Organizations that adopt AI responsibly gain efficiency and agility
  • Industries evolve, creating new roles we couldn’t have imagined a decade ago

Upskilling Is the Safety Net
Waiting for AI to “settle” or hoping it won’t affect your role is like standing still on a moving walkway. The key is to start small:

  • Try AI tools in your daily workflow
  • Learn how they operate and where they falter
  • Build confidence through experimentation

Even introductory exposure can spark ideas and reveal opportunities for growth.

From Threat to Tool
AI isn’t a job killer—it’s a job shifter. And like every major technological leap, it rewards those who adapt. The goal isn’t to compete with machines—it’s to collaborate with them.

I’m not interested in AI because … I don’t trust the results I get from AI.

That’s not an unfounded belief—especially in today’s landscape. We’ve all seen examples of AI-generated images with surreal glitches: a person with three legs, extra fingers, or eyes that seem to float off their face. These visual hallucinations are easy to spot. But when AI is used for data analysis, writing, or decision-making, the errors can be far more subtle—and potentially misleading.

The Risk of Misleading Outputs
AI systems can misinterpret context, fabricate information, or present data that’s outdated or skewed. These issues often stem from:

  • Training data limitations
  • Ambiguous or poorly phrased prompts
  • Overconfidence in AI-generated responses

And unlike a photo with three eyes, a flawed spreadsheet or misleading summary might look perfectly normal—until it leads to a bad decision.

So What Can You Do?

  1. Use AI as a sandbox, not a crystal ball
    Tools like Microsoft Copilot, ChatGPT, and others are best used as experimental engines—great for brainstorming, drafting, and exploring ideas. But they’re not infallible oracles.
  2. Vet the results, always
    Just like you wouldn’t blindly follow a GPS into a lake (though some have tried!), you shouldn’t accept AI output without scrutiny. Cross-check facts, apply your own judgment, and use domain expertise to validate what you see.
  3. Set realistic expectations
    AI isn’t magic—it’s math. Knowing what kinds of results to expect helps you spot when something’s off. If the output feels too confident or too vague, that’s a red flag.

Trust, But Verify
“Trust but verify” isn’t just a Cold War mantra—it’s a survival skill in the age of AI. The more we treat these tools as collaborators rather than authorities, the more value we can extract without falling into the trap of blind trust.

I’m not interested in AI because … the results are biased.

You’re absolutely right. Bias in AI is a real and pressing concern. It stems from multiple sources, including:

  • Training Data: If the data used to train AI reflects historical inequalities or skewed perspectives, the system may replicate and amplify those biases.
  • Algorithm Design: Choices made during development—such as which features to prioritize—can unintentionally favor certain outcomes.
  • Human Oversight: Even well-intentioned developers and users bring their own assumptions, which can shape how AI behaves.

Bias doesn’t just affect accuracy—it can impact fairness, trust, and even safety. In fields like healthcare, hiring, or criminal justice, biased AI systems can reinforce systemic discrimination or lead to harmful decisions.

Organizations deploying AI responsibly are taking steps to reduce bias, such as:

  • Diverse and representative training datasets
  • Regular audits and fairness testing
  • Transparent reporting of model behavior
  • Human-in-the-loop systems to catch anomalies

These efforts don’t eliminate bias entirely, but they help manage it—just like we do with any complex system.

Skepticism is healthy. But just as we’ve learned to critically evaluate information online, we can learn to engage with AI thoughtfully. The goal isn’t blind trust—it’s informed use. And with the right safeguards, AI can be a powerful tool that reflects our values rather than distorts them.

Conclusion
AI isn’t just arriving—it’s accelerating. Whether you’re curious, cautious, or skeptical, the smartest move is to engage now. The future isn’t waiting.

Need a deeper dive into introducing AI into your organization? Reached an impasse, either technical or organizational? Reach out to our team and explore the possibilities.

As always, Stay Curious. Answers may vary.

Rick Ross

CEO – Chief Engineering Officer, and your go-to “AI Guy”.

SwitchWorks Technologies Inc.

Rick is an experienced IT consultant and a lifelong early adopter of emerging technologies. SwitchWorks Technologies Inc. is a digital engineering company helping organizations harness innovation to improve operations in meaningful, measurable ways. Have questions? Interested in finding out more about AI or other emerging technologies? Feel free to ask.
