At the beginning of every year, I aim to write something forward-looking based on my experiences as a CISO – reflecting on the past year and what I expect to unfold in the year ahead. What I write is usually a blend of my own expectations for the coming year(s) and what I know my peers are thinking about. As I sat down to write this year’s post and considered the topics being talked about at large today, AI came to mind. My take, specifically, is that I am burnt out on all things AI, and based on discussions with peers, I know others in the cybersecurity community feel the same way.
That realization led me to take this opportunity to explore my apprehension in more depth and to uncover some of its root causes – both for myself and for others in the industry who may feel the same.
The Internet is incredible. It has enabled so much opportunity for so many, including those with malicious intent or without a moral compass. That challenge is why I chose to work for Yubico: to support its mission to make the Internet more secure for all.
In recent years, generative AI has flooded every corner of the Internet with rapidly created content, without a commensurate increase in “useful” content, leaving the whole feeling lopsided. As a result, the Internet has filled with misinformation (both accidental and purposeful), scams, and a variety of get-rich-quick schemes much more rapidly than in the past. For me, this has meant a massive increase in inquiries from aging family members about convincing scams, discussions about topics that lack any basis in fact, and a significant uptick in sophisticated social engineering.
Part of the apprehension around AI is the chasm between expectation and reality. AI is marketed as something that can perform entry-level duties on the path to “freeing” humans from the need to work at all. On the strength of that promise, AI has seemingly been integrated into everything we see, use, and interact with on a regular basis – including home appliances. As someone who is excited about technology, I’ve spent a lot of time trying to find tasks I can offload to AI – and have been largely disappointed, with exceptions for note-taking, troubleshooting logs, and some remedial information gathering (setting aside the purposeful degradation of search over the last decade).
Beyond social engineering and misinformation, the inclusion of AI in a majority of products and services, from our CRM to our browsers, has dramatically expanded our threat landscape: traditional concerns about data security and intellectual property rights now sit alongside new threats like prompt injection, data poisoning, hallucinations, and the problems that come with AI’s non-deterministic nature. This increase in risk does not come with a commensurate amount of value today. At the same time, the hype around AI has generated massive demand from internal stakeholders trying to come up to speed with the technology, creating a unique challenge for risk management and security organizations as we navigate the right balance between supporting and protecting the business.
The world has invested $544.6 billion in AI over the last five years. By comparison, the United States invested over $120 billion in today’s dollars in the Apollo program between 1960 and 1973. I did not watch in real time as Americans first set foot on the moon, but I suspect that achievement felt different than my first session with ChatGPT or Gemini. I feel society has been forced into an early-adopter role under the guise that AI is ready for prime time. I don’t think it is right now, but I am continuously told otherwise by vendors and individuals who have a vested interest in changing my opinion.
The security team recently reviewed an internal tool that was vibe-coded into existence by an individual working with a technical stack they weren’t very familiar with. On the surface, a complete solution had been developed in days instead of weeks: a fully functional, modern user interface (UI), a full-featured set of APIs, robust logs, and even a comprehensive threat model highlighting threats and mitigation strategies. As the security review progressed, however, we quickly realized that the authorization model wasn’t applied consistently. Specific APIs weren’t performing appropriate authorization checks, the logs included sensitive information and were incomplete, and authentication could be bypassed entirely if the JSON Web Token (JWT) was simply omitted, among other issues.
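That last flaw follows a classic fail-open pattern. As a minimal sketch – assuming an Express-style API using the jsonwebtoken library, with hypothetical middleware names and configuration rather than the actual tool’s code – it looks roughly like this:

```typescript
import type { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

// Hypothetical configuration, for illustration only.
const SECRET = process.env.JWT_SECRET ?? "dev-secret";

// Flawed, fail-open middleware: the token is verified only when one is
// present, so a request that simply omits the Authorization header skips
// verification entirely and reaches the handler unauthenticated.
export function authFailOpen(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization;
  if (header?.startsWith("Bearer ")) {
    try {
      (req as any).user = jwt.verify(header.slice("Bearer ".length), SECRET);
    } catch {
      return res.status(401).json({ error: "invalid token" });
    }
  }
  next(); // BUG: no token means no check at all
}

// Fail-closed version: a missing token is rejected just like an invalid one.
export function authRequired(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization;
  if (!header?.startsWith("Bearer ")) {
    return res.status(401).json({ error: "missing token" });
  }
  try {
    (req as any).user = jwt.verify(header.slice("Bearer ".length), SECRET);
    next();
  } catch {
    return res.status(401).json({ error: "invalid token" });
  }
}
```

The flawed version reads as reasonable code and passes a happy-path test with a valid token; it fails only when a caller does the one thing its author never anticipated: send nothing at all.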
Using agentic AI to address these issues produced other issues and, occasionally, comprehensive and assertive documentation about a fix that was never actually applied to the codebase. In some respects, this is worse than a junior engineer. My apprehension about AI is that we have invested an exorbitant amount of money and time in it, and it has created an asymmetric advantage for the adversary, whether that is an attacker or simply another contributor to digital pollution.
To be clear, I am not anti-AI; its usefulness has improved rapidly over the years, and I anticipate it will eventually help the general public in a meaningful way. In the meantime, we should focus more effort globally on accelerating the adoption of digital content transparency and authenticity standards, to help everyone discern fact from fiction, and on continuing the phishing-resistant MFA journey to minimize the impact of scams. For example, it will be critical for the industry to identify the AI-driven cybersecurity threats, and associated countermeasures, that will be most prevalent – including identity-based attacks.
For those who would like to debate what 2026 might bring to the table for security leaders and organizations, the prompt ‘What types of cybersecurity threats and associated countermeasures will be prevalent in 2026?’ might get you started.
