Your "10 Years of AI Experience" Requirement Is Impossible. And Embarrassing.
Anna August
We’re all riding the contemporary bubble called “AI.” Oh, how wonderful the ride is: it promises much, gives a taste of novelty, excites. And yet, it sometimes causes form to exceed substance.
The job market has also gotten swept up in this hype.
More and more job postings look identical today: “hiring an AI specialist with 3, 5, 8, 10 years of experience working with AI.”
Seriously? Someone was working with generative AI ten years ago?
I get it. AI as a field of science has been developing since the 1950s. Machine Learning and Deep Learning have existed for decades. Someone could have been building recommendation systems in 2010, doing computer vision since 2014, training predictive models since 2015. Those could genuinely be years of experience.
But the recruitment boom we’re seeing today isn’t about that. It’s about generative AI: AI chatbots, GitHub Copilot, LangChain, image generation — tools that entered mass adoption in 2022-2023. It’s about prompting, building RAG pipelines, fine-tuning language models, creating autonomous AI agents, understanding hallucinations and bias in LLMs.
Nobody was doing this 10 years ago. Because it didn’t exist.
And here’s where the problem begins.
The market is behaving as if there’s a large pool of people with years of experience in this (current) form of AI. Meanwhile, such a group is very small and concentrated in just a few organisations. There’s a narrow circle of specialists who participated in early AI programs (Google Brain, OpenAI, DeepMind, Microsoft Research), but they’re out of reach for most companies looking to hire. Just look at what Zuckerberg is offering for such unicorns.
A company posting such a job ad is shooting itself in the foot. It shows that it doesn’t understand AI itself and doesn’t know how to approach the topic. Most likely, this same company hasn’t yet shipped any generative AI project and is looking for someone to do it for them.
What a flipped dynamic.

But this isn’t the way.
Dear companies, you’re exposing yourselves and revealing your weakness: a lack of understanding of the subject. To candidates considering applying, this is a clear signal that inside the organisation they’ll most likely find chaos and disorientation. Everyone is driven by big slogans, but nobody really knows or understands anything. Probably everyone is dealing with everything: decision-making diluted across management, responsibility for the topic scattered across teams. So the whole construct stands at the starting blocks, tied up, unable to begin the race. Or (worse still) it’s already started running, and the awareness of being tied up will only arrive several months in.
So what’s needed instead?
Skillful selection of employees with potential. Yes, with potential, not ready-made skills.
The world of generative AI is so fresh that it demands adaptation. There are no shortcuts. And the workplace is precisely the institution responsible for providing the space to acquire those skills on the road to project success.
So if the role is highly technical, let it be someone who’s worked with data science, machine learning (NLP, credit scoring, fraud detection, computer vision). That’s a good start. There are many who’ve been immersed in this for years and can translate it into the world of LLMs.
If the role isn’t that technical, consider this: what skills would help a potential employee grasp AI and its nuances? What tools can such an employee bring that will pay off in an AI context?
Will it be data management and all forms of data manipulation? I think so.
Working with automation tools (Zapier, n8n, Make) or any automations at all (system integrations, BPMs, schedulers, API workflows)? Oh, yes, please.
API literacy? Of course.
Is this AI? Absolutely not.
Are these individual building blocks that will enable building something valuable with AI? Hell yes.
But not only that.
There’s a whole group of people beautifully suited to working with AI thanks to a range of learned skills that aren’t necessarily hard technical competencies:
- Structural thinking
- Pattern recognition
- Information quality management
- Process thinking
- The ability to ask questions
- Context management
- Risk and consequence assessment
Skills like these sometimes hide behind the most surprising job titles. So why limit your chances of finding the ideal employee, one who will need some upskilling but who, with these foundations and room to keep learning, will very quickly become real added value in any AI project?
But this requires awareness in organisations.
What AI is and what it isn’t. What it’s based on. Where its strengths lie and where its weaknesses. What possibilities it has, where its gaps are, where it will provide support, and how to communicate with it.
This is homework for organisations to do. Don’t offload it onto your future employees. They’re not with you yet, and already you want them to be your lifeline.
If an employer wants to succeed, they need the right people.
And to have them, they must recruit them wisely.
Requiring five years of experience in generative AI is often simply misguided. Most people who truly understand today’s wave of AI don’t yet have that kind of tenure, but they do have the competencies, the curiosity, and the first steps already behind them.
Seeing such a posting, they often don’t apply because they know what’s formally expected of them and what they can’t yet demonstrate.
At the same time, there are also people who don’t even know that their existing experience is extremely valuable today.
It’s the company that should be able to spot potential, not just count years.
The market doesn’t need “AI era veterans” (because there are few of them, and they’re still babies anyway). It needs people capable of understanding and co-creating it.
