Wednesday, April 1, 2026

What are 3 big AI Shifts in the midst of 2026?

AI is leaving the “wow” phase and entering the “prove it” phase. Many companies are about to realize their strategy was built for demos, not reality. In other words, AI is growing up. It’s time to move on from the hype toward the hard tradeoffs. What are 3 big AI shifts?

  • Magic to Money 
  • Demos to Deployment 
  • Capabilities to Consequences

For the past few years, AI has lived in the realm of spectacle—impressive demos, viral moments, and “did you see what it can do?” reactions. That phase is ending. What’s replacing it is more grounded, more valuable, and a lot less comfortable. 
AI is shifting from magic to money, demos to deployment, and capabilities to consequences. Let’s examine these more fully.
Magic to Money
The novelty is wearing off. Organizations are no longer impressed that AI can generate content, write code, or analyze data. People are asking whether it actually drives revenue, reduces cost, or creates a defensible advantage. This is where many AI initiatives encounter friction: what initially appears magical struggles when tied to real business metrics, messy data, and existing workflows. A current challenge is proving consistent, measurable ROI, not just isolated wins and talk.
Demos to Deployment
We’ve all seen the polished demos. But deploying AI into production is a different game entirely. Integrations, governance, reliability, edge cases, and user adoption quickly surface. The gap between “it works in a demo” and “it works every day in real life” is where most efforts stall. The winners are no longer the ones with the flashiest models, but rather those who can operationalize them at scale. A current challenge of getting to “real life” is bridging the last mile from prototype to dependable, repeatable execution.
Capabilities to Consequences
As AI capabilities grow, so do concerns about accuracy, bias, job displacement, environmental cost, security, and trust. Leaders are increasingly forced to weigh not just what AI can do, but what it should do and what risks they’re willing to accept. The conversation is shifting from innovation to responsibility, often faster than organizations are prepared for. A current challenge is how to manage risk and accountability without slowing innovation to a crawl.
What shifts are you seeing, and what challenges are you facing in getting to full use of AI?






Monday, February 23, 2026

Risk of AI Sameness

Everyone is racing to adopt AI tools.  Most people are using AI to create “speed”.  Faster emails. Faster proposals. Faster code. Faster content. And yes, faster is good.  But here’s the quiet risk no one is talking about: if everyone in your company uses the same AI the same way, you may slowly start sounding exactly alike.

Are you ignoring “AI sameness” risk? Without intention, standard AI tools can homogenize your messaging, flattening distinct perspectives into one generic voice. The danger isn’t bad output. It’s average output at scale. AI isn’t going to replace your team. It’s going to standardize them, but not in a good way.

When everyone uses the same tools, trained on the same data, prompted in the same way, you don’t get divergence, you get convergence. Everyone will sound the same, with the same sentence structure and tone, and with the same “polished but generic” voice. Then there is the further danger that over time, unique thinking gets flattened into safe, average, AI-shaped output. Not because your people aren’t smart, but because the tool defaults to the statistical middle.

AI should amplify your edge, not sand it down. It should sharpen your thinking and make it more opinionated and more differentiated. If you’re not intentional about how your teams use it, it can drift toward this sameness. What can you do to avoid it?

  • Craft your own perspective before prompting AI.  As it relates to the topic, what do you actually believe? What do most people get wrong about this? What would you argue in a debate?
  • Use AI to provide you with a draft, not the deliverable. Within that context, establish one strong opinion. Provide one specific example from your world. Craft your own voice so that sentences sound unmistakably like you.
  • Add friction to the output. AI will often be a people pleaser, so challenge your output. Ask what’s missing. Determine if it feels too safe. Consider whether it sounds too predictable. If it reads smoothly but doesn’t make you think, that’s a warning sign.

AI naturally drifts toward the statistical middle. Avoiding sameness requires intentionally looking for differences: injecting strong beliefs, specific context, and human judgment layered on top. And honestly? The companies that figure this out won’t just use AI faster. They’ll use it to amplify their uniqueness.



Saturday, January 31, 2026

AI Coding: Shifting the Developer Role

Coding with AI is producing code at a faster rate than ever and accelerating the release of production increments. Code can be generated in minutes, and the speed feels good. This raises the question: what does the software developer do now? It changes where the developer’s focus goes.

While AI is generating the code, it doesn’t own the code. The developer remains accountable for it, which means they must review the code deeply enough to understand how it works, why it works, and where it could fail. They must also focus on verification activities surrounding the code. This article is based on some experimentation with AI coding and on ensuring the developer keeps a good understanding of the code changes.

Think of AI as a junior engineer it is your job to raise up. It can produce a lot of code quickly, and it can be confidently wrong when doing so. It has no sense of risk, context, or consequences. Think of verification as the handoff where ownership transfers to a human; it is still your job to ensure a verified, quality outcome. Verification should take a majority of engineering time, or logical gaps will later surface as failures in services, data, and infrastructure. How does the developer’s responsibility shift?

  • Review code written for understanding. Ask the AI tool to explain what the code does, line by line. Ensure the explanation isn’t vague and that it aligns with what you are thinking. Ask what the expected outputs and outcomes would be. Then ask what would break the code. Finally, ask the AI why this approach was chosen over alternatives. A useful litmus test: if you wouldn’t feel comfortable maintaining this code for the next year, you don’t understand it well enough.
  • Ensure that the code was version-controlled properly and in the correct branch. This includes checking for potential issues before merging it into the main codebase.
  • Step up code reviews. This means peer-checking code for quality and adherence to standards. The developer should share coding standards with the AI tool so its output aligns with them. If coding standards are missing, they must be written and then added to the AI tool’s context (for example, its vector store).
  • Spend time sharing responsibilities with Testing to ensure all verification activities are completed.  This should include appropriate testing: Unit Testing (e.g., testing individual components or functions in isolation), Integration Testing (e.g., testing how different components work together), System Testing (e.g., testing the system as a whole for speed, scalability, and stability), and more. 
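The testing bullet above can be made concrete with a minimal sketch. Everything here is hypothetical and invented for illustration: `apply_discount` stands in for a small AI-generated function, and the checks show unit tests on isolated behavior plus a deliberate “what would break this?” probe, the kind of verification that transfers ownership from the AI to the developer.

```python
# Hypothetical example: verifying an AI-generated pricing helper.
# The function and its tests are illustrative, not from a real codebase.

def apply_discount(price: float, percent: float) -> float:
    """AI-generated helper: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: exercise the function in isolation, including the
# boundary cases the AI may not have considered on its own.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(100.0, 0) == 100.0    # boundary: no discount
assert apply_discount(100.0, 100) == 0.0    # boundary: full discount

# A "what would break this?" probe: invalid input must fail loudly,
# not silently produce a wrong price.
try:
    apply_discount(100.0, 150)
    raise AssertionError("expected ValueError for percent > 100")
except ValueError:
    pass
```

The point is not the function itself but the habit: every generated change ships with tests the developer wrote (or at least read and challenged), so the human can answer what the code does, what its edges are, and how it fails.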

AI reduces typing time. It does not absolve you of the responsibility and judgment for a well-built product! AI changes where time is spent, not whether time is spent. While we will spend less time coding, we are still accountable for spending time verifying and understanding the generated code to ensure it delivers the outcomes we are looking for.