
This article gathers a few ideas and feedback from the trenches after training a dozen professional developers in agentic AI frameworks in 2025.
AI engineering is a new discipline and requires new training programs
I understood early that agentic frameworks were not just a shiny new set of tools but the embodiment of a new discipline: AI engineering.
It's important as a professional trainer to think in such terms, because an emerging discipline generates a need for new concepts, new practices, and eventually new courses and training programs.
On the HR side, some lobbying must be done to help companies understand how to properly recruit an AI engineer.
Namely, agentic AI is closer to software engineering than it is to data science. For instance, you may still want to recruit a brilliant software engineer for building an AI system even when they know nothing about statistics.
Here's a deeper explanation backing this conclusion.
The point of agentic AI is this: LLMs are known for their generative capabilities, but people often miss the foundational perspective.
Generative AIs are also dubbed "foundation models" because they replace the traditional approach of training a new machine learning model for each and every problem: instead, a very big model is specialized on the fly using a prompt.
Roughly put:
"LLM + prompt" replaces "ML model + dataset + training"
This is the cornerstone of agentic AI, because making a decision can be formalized as labelling a problem with the proper answer among a set of possible decisions.
In the past the limitation was that each type of decision would have required its own machine learning model, dataset, and training process.
Now each type of decision may require only a single prompt. That's the recipe for autonomous AI agents: put that in a loop, add tools to act on the real world, and you have invented the ReAct architecture, aka the agentic loop.
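That loop can be sketched in a few lines of plain Python. This is a conceptual illustration, not any framework's API: the `fake_llm` function is a scripted stand-in for a real LLM call, and all names here are made up for the example.

```python
# Minimal ReAct-style agentic loop: the model decides, tools act, repeat.

def get_weather(city: str) -> str:
    """A toy tool acting on the outside world."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(messages: list[dict]) -> dict:
    """Stub standing in for a real LLM API call on `messages`.
    First turn it requests a tool, second turn it gives a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"answer": "It is sunny in Paris."}

def agent_loop(question: str, max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        decision = fake_llm(messages)      # "LLM + prompt" makes the decision
        if "answer" in decision:           # final answer: exit the loop
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # act on the world
        messages.append({"role": "tool", "content": result})  # observe
    raise RuntimeError("max turns reached")

print(agent_loop("What's the weather in Paris?"))
```

Swap the stub for a real chat-completion call and a real tool registry, and this skeleton is recognizably what agentic frameworks give you out of the box.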
I am simplifying: you'll have to do a lot of context engineering for this to work in real life, and sometimes you won't escape fine-tuning or even training a new model.
But still, the advances in coding agents have proven this approach relatively correct.
Now let's go back to my training story. Before diving deeper into the AI part, let me explain what my job is about in the first place.
Remote sessions versus self-learning: learning AI with a teacher is faster
I run short, synchronous remote sessions, and I am often confronted with developers wondering whether professional training is really useful or whether I deserve to go to hell with other consultants and carpet sellers.
It makes sense: after all, you can learn frameworks on your own; LangChain does have an excellent free e-learning academy. It's the resource I used myself, so why pay a trainer when you can learn for free?
Because self-learning is dead slow. I often explain to potential customers that I don't so much sell skills as time.
My own biased estimation is that I compact into 3 days concepts that you would usually take 3 weeks to properly grasp.
It took me almost a full year to build solid knowledge of generative AI models and agentic AI patterns. You may not have that much time if you are leading an AI project your customers are waiting for, in a startup or an SME.
Some may argue that taking your time leads to deeper understanding, but in my experience learners often just waste their precious time because they are blocked by a question they can't articulate, an incorrect assumption, a dull bug they couldn't fix, and so on. I help them get rid of that useless kipple.
Teaching LangChain to Python developers
LangChain is the de facto standard framework for agentic AI. It's one month older than ChatGPT itself (LangChain was first released in October 2022).
LangChain exists both in Python and JavaScript to serve as many users as possible, though I only teach LangChain in Python (you'll understand why in the next section).
It's also the first training I've opened up to unemployed developers. Ironically, when you are out of a job, you also have no company to pay for your training, so it's more difficult to learn new skills.
That's a pretty vicious circle and accessing public funding in France (MonCompteFormation) is next to impossible for an SME like mine. Yet we did it and we are grateful to be part of some people's path to finding a new and better job.
The first training sessions validated our hypotheses about AI engineering. Participants were not data scientists or ML engineers, but IT professionals from various positions who use Python only here and there.
They loved the idea of building their own agentic system, and they mostly applied their knowledge to enhancing existing information systems, for instance by plugging a RAG pipeline over company wikis.
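The idea behind such a pipeline fits in a short sketch: retrieve the most relevant wiki page, then stuff it into the prompt. Keyword overlap stands in here for the embedding search a real RAG system would use, and the wiki content and function names are invented for the example.

```python
# Toy RAG retrieval over a "company wiki": pick the most relevant page,
# then build an augmented prompt. Real systems use embeddings + a vector store.

WIKI = {
    "vacation policy": "Employees get 25 days of paid vacation per year.",
    "vpn setup": "Install the VPN client, then log in with your SSO account.",
}

def retrieve(question: str) -> str:
    """Score pages by word overlap with the question; return the best one."""
    words = set(question.lower().split())
    best = max(
        WIKI,
        key=lambda title: len(words & set((title + " " + WIKI[title]).lower().split())),
    )
    return WIKI[best]

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many vacation days do I get?"))
```

The augmented prompt then goes to the LLM; the whole trick of RAG is that the model never needs to be retrained on the wiki, the relevant context is fetched at query time.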
I love these "dull" use cases because they are so far away from the VC-funded AI-saves-the-world vibes we currently get from the United States and some local influencers.
Most French companies are SMEs (90% of companies and 50% of jobs). They don't care about AGI; they care about sustainable growth and reducing the weight of administrative and commercial tasks. Agentic AI and generative AI help with that.
Initially, LangChain was the low-level integration logic, LangGraph the agentic framework, and LangSmith the monitoring platform.
It was a tad complex and blurry, and complex and blurry things are not fun to teach. Trust me, I also teach Next.js and its increasingly intricate and vulnerability-prone architecture: it lost its fun a few years ago already.
The release of LangChain v1.0 was a big relief, with a clearer separation between the open source frameworks (LangChain, LangGraph) and the commercial platform (LangSmith).
Also, LangChain now includes an agentic loop out of the box, covering maybe 80% of the use cases you'll ever have for AI agents. This is closer to how other frameworks like Haystack and CrewAI define agents.
LangGraph is still central to my 3-day training, as it lets you go beyond the agentic loop and craft your own agent architectures and AI workflows. This is necessary for more complex AI projects in bigger companies or startups.
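The core idea LangGraph generalizes can be sketched in plain Python: a workflow is a set of named nodes working on a shared state, with edges (possibly conditional) deciding which node runs next. This is a conceptual illustration of the pattern, not the LangGraph API, and every name in it is invented.

```python
# Hand-rolled state-machine workflow: nodes transform a shared state dict
# and return the name of the next node (or None to stop). A graph framework
# adds persistence, streaming, human-in-the-loop, and so on, on top of this.

def draft(state: dict) -> str:
    state["text"] = f"Draft answer to: {state['question']}"
    return "review"

def review(state: dict) -> str:
    # Conditional edge: loop back to drafting until the text is long enough.
    state["reviews"] = state.get("reviews", 0) + 1
    return "publish" if len(state["text"]) > 10 else "draft"

def publish(state: dict) -> None:
    state["published"] = True
    return None  # terminal node

NODES = {"draft": draft, "review": review, "publish": publish}

def run(state: dict, entry: str = "draft") -> dict:
    node = entry
    while node is not None:
        node = NODES[node](state)
    return state

result = run({"question": "What is agentic AI?"})
print(result["published"])
```

In a real project each node would typically wrap an LLM call or a tool, and the conditional edges are where your custom agent architecture lives.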
Teaching Mastra and Vercel AI SDK to JavaScript developers
I've trained a first group of 5 frontend developers and 1 backend developer in Mastra, the agentic framework for JavaScript.
Not to be confused with our French LLM provider Mistral.
The training program starts with the Vercel AI SDK, moves on to Mastra agents, and finishes with deployment, monitoring, and other production concerns.
I've picked Mastra as the main technology for my JavaScript agentic AI training, rather than the more natural choice of LangChain.js, because I am a strong believer in idiomatic code.
Mastra was created by the makers of Gatsby. They are web developers who built an AI framework for web developers. It will always feel more intuitive than a JavaScript translation of a Python framework.
The first training I gave in December (after a short GitNation Mastra workshop) went SUPER smoothly. It's easy to teach Mastra to web developers. The interactive Mastra Studio is very helpful too.
I again validated the idea that AI engineering is more about engineering than coding. Web development is facing a crisis worldwide, with a sharp reduction in job offers.
I won't claim that fully pivoting to agentic AI is the way to go, as the market is still nascent. Yet enhancing your JavaScript fullstack development skills with AI engineering skills is definitely a good move forward.
Towards 2026: teaching Cursor and AI-assisted coding to developer teams
I've spent 2025 teaching agentic AI and it's been a hell of a ride. I think I've never learnt so many things at once in my life, a total mind shift.
After understanding how to craft agents, my new focus for 2026 is naturally how to use them for productivity as a developer.
I've picked Cursor as I think it checks all the boxes for education:
- It's a text editor, so a tool developers are already familiar with;
- Yet it's also an AI agent usable through CLI, so you're not limited in your learning and you can discover more advanced AI-assisted development patterns like orchestrating background agents;
- They recruited Lee Robinson (ex Vercel) who produces quality Cursor resources for beginners;
- They built their own LLM, Composer, indicating that Cursor is a serious player, here to stay in the wild world of coding AIs.
Teaching Cursor is quite a challenge though, because each context where you apply it is totally unique. I've designed a first Cursor training that can adapt to any project, and a second Cursor training specific to web development.
To sum it up, 2026 will be the year where I teach how to write agents with agents!
Article written by hand with my heart, cover image generated by blending free images from Pexels using ChatGPT.
