
AI and The High-Agency Mindset

It was only late last year that I first heard the term "high-agency mindset", even though it's nearly a decade old. Once I learned about it, I started to recognise the signs, to some extent, in myself and in some others I know. It became clear why high-agency individuals surround themselves with other high-agency individuals, and why they can become frustrated with low-agency ones. Where others see blockers and wait for (or expect) someone else to clear them, high-agency individuals see solutions, expect people to take responsibility for resolving them - or simply go around them. It's a mindset that aligns well with the concept of Directly Responsible Individuals, or DRIs.

But it also became apparent that AI could have a big impact on the futures of both high-agency and low-agency individuals.

The High-Agency Mindset

High-agency is a mindset and behaviour characterised by taking extreme ownership of one's life to create opportunities, rather than waiting for them or accepting limitations. I would personally categorise myself as a fairly high-agency realist: I accept some limitations - a viewpoint that seems at odds with a literal reading of high agency - and I can often procrastinate. But there are many situations where I have sought out opportunities, and when I fully embrace one I do whatever I can to optimise the chances of success. I am also not discouraged by learning new technologies or taking calculated risks. And when I make mistakes, I try to learn from them rather than repeat them.

AI Use

High-agency individuals in IT are probably already using AI heavily. Even if their day job does not offer many opportunities to use AI, they will probably be using it in personal projects or research - although, being high-agency, they have probably sought out jobs that embrace new developments like AI in the first place.

I can see three different categories of people with the high-agency mindset using AI.

Risk Takers

There are many of these, as the adoption rate - beyond viral - of OpenClaw has made apparent. The rate of GitHub stars, and the impact it had on Cloudflare's share price, are well documented online. So are the project's security issues, which its creator Peter Steinberger was open about. Even if some of the failures highlighted on social media were deliberately engineered, it's unlikely that all of them were. But the benefits reaped by the risk-takers are also undeniable.

This group will be at the forefront of early adoption. They will make mistakes, but they will also benefit from the improvements driven by other early adopters - and they will strengthen many innovative open source projects with real-world testing and pull requests.

Innovators

The innovators will also be among the early adopters of AI, as they are for any new technological advances. They will test the boundaries of AI and bend it in many new directions - as has been shown by Peter Steinberger himself. They will use AI to speed up their professional and personal work.

Their innovations will benefit others; much of their work will be open source, and will in turn benefit from other early adopters.

Deep Thinkers

The deep thinkers will focus on understanding the technology: working out why it goes wrong, improving it, and solving problems. A good example here is Geoffrey Huntley and his ralph loop. But there are many academics involved as well - a great example being the MIT report on Recursive Language Models, a technique rather than a type of language model. This group also focuses on avoiding duplication of effort, which is why we have the Model Context Protocol and AGENTS.md as part of the Agentic AI Foundation. Their understanding will help them get the most from their use of AI.

These are also highly engaged individuals who share techniques that help everyone, and it's already apparent that the major AI tool vendors are integrating that learning into their tooling. goose has had livestreams and tutorials covering the ralph loop and Gas Town. Claude Code has also leveraged the ralph loop and evolved using techniques from other projects like OpenClaw.
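For readers who haven't seen the technique, a ralph loop is, at its core, just a shell loop that re-feeds one prompt to a coding agent until the work is done. A minimal sketch follows; note that `agent_cmd` is a stand-in stub, not any vendor's real CLI, so the shape of the loop can be shown without depending on a particular tool:

```shell
# Minimal sketch of a ralph loop: feed the same prompt to an agent,
# over and over, letting the repo state carry progress between runs.
agent_cmd() {
  # Placeholder for a real agent CLI. This stub just consumes the
  # prompt from stdin and reports that an iteration completed.
  cat > /dev/null
  echo "iteration done"
}

PROMPT="Implement the tasks in PLAN.md. Mark each task done when finished."

# A real ralph loop runs indefinitely (`while :; do ... done`);
# three iterations keep this sketch finite.
for i in 1 2 3; do
  printf '%s\n' "$PROMPT" | agent_cmd
done
```

In practice the loop's power comes from persistence outside the agent: each run sees the files and the plan the previous run left behind, so even a forgetful agent makes incremental progress.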

Low-Agency and AI

The low-agency individuals will be reluctant to use AI for a variety of reasons, including but not limited to:

  • They have not been told they can use AI and do not want to take responsibility for the risks.
  • They are highly risk-averse.
  • They are resistant to change.
  • They measure their worth by their knowledge of the code base and their ability to write the code, and they fear AI might take those away.
  • They are happy being just “code monkeys”. They do not wish to, or lack the skills to, create the plan and orchestrate an agent or a team of agents.
  • They want to use tools and technologies that “just work”. That’s not AI, at least not yet.
  • For various reasons they don’t want AI to succeed, so look for use cases that fail or give up quickly.

For these and other reasons, the low-agency individuals and organisations will be late and slow adopters of AI.

The Gulf Widens

But AI adoption is like travelling at near light speed while those who don’t adopt AI stay on Earth: the distance in time between the two parties grows exponentially. And the time dilation is greatest at the bleeding edge, where models released even in the last few weeks enable a level of agent autonomy and a speed of development that could not be envisaged even in autumn 2025.

And a discussion with AI on that concept raised another valid point: the "event horizon". There may come a point where the gap is so large that the laggards can no longer see or understand what the leaders are doing - the "distance" becomes a barrier to entry that is impossible to close. I’ve seen this in other technology areas, where those who understood and shared moved on. Those left behind floundered because they lacked the skills to reach the level of understanding that had existed, and those with the skills were off doing more self-fulfilling things.

The Shake-Up

Let’s not sugar-coat this. Some late-adopter companies will disappear, unwilling or unable to adapt - the “Blockbuster” effect, as it were. Some low-agency consultants will do what customers tell them and lose out because their work is uncompetitive with black-box off-the-shelf or custom solutions that do use AI. That is, what the customer says they want and what they’re willing to pay for when budgets are cut will prove to be two different things - something that has been proven before.

But equally, some risk-takers may become media horror stories. If they are lucky, the media and the laggards will just sneer at their errors while their revenue streams suffer only a blip. A few may be hit harder. They will make headlines, but those headlines will be distractions to satisfy those resistant to change.

The benefits of AI are clear, and it’s here to stay. But it needs a workforce with a mindset and skills different to those of the human-coding era. Some of the existing workforce will adapt easily, some will be able to learn, but some will never make the transition. I am sure there will still be jobs for them, for a while, as there are with all legacy systems and the customers who use them. But they will be jobs for an increasingly ageing workforce, not career paths.