Ep. 3 | On the Edge: From the Front Line to the Frontier of Innovation, Meet the French Startups Building Tomorrow’s Defense
An exclusive dive into the French startups redefining the rules of defense. Ep. 3 | Hardcoding the Red Line & The war of tomorrow is being shaped today.
This article was written in partnership with Hexa. Hexa recently launched a new accelerator, Hexa Sprint, which is currently supporting defense startups through its program.
If you’re building in defense and looking to accelerate your growth, you can apply here.
Welcome to the third and last episode of “On the Edge.”
Over the past two weeks, we explored the new realities of modern warfare. In case you missed them, you can catch up right here:
Episode 1: New Rules, New Players, New Game — How a new generation of startups is using software and rapid iteration to challenge legacy defense cycles.
Episode 2: Produce or Perish — Why mastering the physical supply chain and mass production is the ultimate test of European sovereignty.
But hardware and code mean very little without the frameworks that govern them. Today, in our grand finale, we tackle the most complex battlefield of all. In “Hardcoding the Red Line”, we dive into the moral and regulatory boundaries of building AI for the frontline, before zooming out to look at what it will actually take to build a unified, resilient European defense ecosystem for tomorrow.
Part 3: Hardcoding the Red Line
The sovereignty imperative goes far beyond simply mastering hardware components. Today, the true frontline lies in the adoption of artificial intelligence within our defense systems. While the technical and strategic challenges are vast, the most pressing questions remain ethical in nature and fundamentally entangled with the growing autonomy of our arsenals.
Many voices are sounding the alarm about the rapid development of AI. Could an artificial intelligence with amplified capabilities become a systemic threat to humanity? While these narratives echo the most famous works of science fiction, the concerns are real and voiced by leading figures in the ecosystem itself. In a recent essay, Dario Amodei, CEO of Anthropic, clearly outlines the threats posed by AI if it is militarized without safeguards: the risk of a hostile nation achieving military dominance or launching large-scale cyberattacks, the appropriation of these tools by terrorist groups to multiply their destructive capacity, or the concentration of absolute power in the hands of dictators.
Described as “patriotic” by Arnaud Valli, this essay is a stark warning against the potential excesses of defense AI, but it also raises deep philosophical questions. Martin de Gourcuff shares that wariness, and confirms that there is no such thing as zero risk when dealing with fully autonomous systems:
“LLMs provide access to a vast body of knowledge and open up a wide range of possibilities. Unfortunately, it is a technology that can be used for harmful purposes. How worried should we be? I’m not entirely sure. But we must not forget that this tool is not fundamentally constrained to do good.”
In this context, a key question emerges: does the growing implementation of AI in Defense inherently represent a risk?
The startup paradox: more AI for more ethics?
When it comes to weaponry, AI primarily translates into autonomy. While automated systems (drones, missiles) have existed for decades, the current robotization of the battlefield marks a new step forward. Yet for our startups, the conclusion is unanimous and almost counterintuitive: controlled automation enables a level of efficiency and performance that strengthens the ethics of strikes.
In practice, this plays out in one concrete advantage: video feedback from autonomous drones makes strikes reversible. Where launching a traditional missile is irreversible, an interceptor drone strike can be canceled up to the final five seconds. Hadrien Canter recalls a tragedy that could have been avoided:
“When pro-Russian separatists and Russian armed forces shot down civilian flight MH17 over Ukraine in 2014, they believed they were targeting a Ukrainian military aircraft. [...] Today, when a drone is launched to intercept a target, if we realize that the radar lock was actually on a friendly or civilian drone, video feedback allows us to cancel the strike at the last second.”
Harmattan AI takes the argument one step further: specialization itself is the safeguard.
“We are developing Gobi, a drone interceptor. Unless we modify its code, it can never be used to target civilians. From that perspective, it is almost more secure than a traditional anti-aircraft gun, which can easily be turned against anyone,” explains Martin de Gourcuff.
Automation is not limited to deploying swarms of drones; it relies on the resilience of communication networks to operate a global command and control (C2) system, similar to the American Combined JADC2 program. However, Arnaud Valli points out the fragility of this hyper-connectivity: “This connectivity can always be compromised, even if Starlink or other constellations make 5G available everywhere.” To what extent can armed forces rely on cloud availability in the middle of combat?
What makes this automation inevitable is a single imperative: accelerating the Kill Chain, the sequence of steps through which a target is processed. The goal is to drastically reduce the time between detecting the enemy and neutralizing it.
“The Russians have improved their detection-recognition-strike system in a very impressive way: previously, it took them five minutes to identify and destroy a target, now it’s under 30 seconds,” notes Arnaud Valli.
AI is revolutionizing targeting, and even reshaping the economics of war, as observed in the recent conflict with Iran.
“During the first strikes [recent ones in the Middle East], there may have been 1,000 strikes managed by only around a hundred analysts… That is 100 times fewer personnel than during the Gulf War, for four times as many strikes. When applied to data aggregation, this multiplier effect is terrifying. But ultimately, it does not win a war on its own,” tempers Arnaud Valli.
The “Human-in-the-loop” and the wall of operational reality
Despite these advances, operational safety is only guaranteed if humans remain in the decision-making loop. AI helps reduce the operator’s cognitive load by filtering information. “AI provides suggestions, but humans remain at the core of the decision-making process,” insists Hadrien Canter.
Technically, autonomous defense systems face the same limitations as autonomous vehicles: handling rare events. Martin de Gourcuff illustrates this challenge:
“If a monkey crosses the road in front of our car, we understand the situation. The algorithm, however, does not: it either misidentifies the obstacle or makes a poor decision because its training data contains very few instances of monkeys crossing roads.”
These perceptual limitations, combined with sensor constraints (sunlight, hallucinations…), slow down the development of fully autonomous systems, even though technologies such as World Models aim to address them. Hadrien Canter sets the record straight regarding tech-driven fantasies:
“While some competing companies oversell a futuristic vision where everything works at the push of a button… operational reality is very different. There is noise in the data, sensors get jammed. Human lives are at stake, and that must not be forgotten.”
Beyond the technical dimension, automation raises deeply political questions. Arnaud Valli highlights the risk of desensitization to war:
“It is clear that systems are becoming increasingly autonomous, with humans simply ‘turning the key’ because they can no longer compute everything. But if China were to deploy a fleet of robots in the Pacific against a US fleet, would there not be far less political and social resistance to initiating a conflict, knowing that there are no direct human casualties?”
The uncomfortable truth is that hostile powers are not waiting for a consensus on the ethics of autonomous warfare; they are already fielding it, pushing Western armed forces toward an essential doctrinal renewal. Who, human or machine, is legally responsible for an autonomous strike? What delegation of the authority to open fire is permissible? It is along this narrow tactical and moral ridge that Defense Tech must evolve. “Our companies must work with the military, but also with theorists, to build a clear, defined framework capable of evolving,” concludes Arnaud Valli.
Part 4: The war of tomorrow is being shaped today
As institutional trust strengthens and private capital finally opens up to Defense Tech, the time for mere proof of concept is over. For these startups, the ambition goes beyond providing isolated technological building blocks: the goal is to establish themselves as the backbone of tomorrow’s armed forces.
The ultimate objective? To become the central infrastructure, the global operating system of European defense. A direct response to American technological ambitions, such as the highly interconnected Combined JADC2 military program. Martin de Gourcuff confirms this desire to be at the core of the system:
“There is real value in being at the heart of the data system. It is fully aligned with our ambition to build the operating system for command centers, the data infrastructure, and the communication systems of armed forces.”
To summarize this pursuit of interoperability and software dominance, Arnaud Valli offers a striking metaphor:
“Why do you think American Defense Tech giants use names drawn from Tolkien’s universe, like Palantir or Anduril? It’s one to rule them all. [...] In an ideal world, we would want a piece of software connected to everyone. The goal, within a European ecosystem, is to reach several billion in revenue. If we manage to shake up how procurement agencies operate in favor of end users, we will have achieved a great deal.”
This software sovereignty must also enable Europe to compensate for the shortcomings of its historical allies. Hadrien Canter sees this as a strong strategic signal: when the United States runs out of stock and can no longer supply Patriot missiles, the emergence of France and its ecosystem, with agile and effective interception solutions, shifts the balance on the global geopolitical chessboard.
But to turn this vision into reality and establish the continent’s security architecture, money and political will alone will not suffice. The true decisive factor of this decade will be human. “Join us to build concrete solutions and have an impact. There is work to be done; we are looking for motivated people who share this mission-driven mindset,” says Hadrien Canter in a general call to action. The message is clear: tomorrow’s European sovereignty urgently needs the best talent of today.
Closing words
Writing this article opened our eyes to a world most of us instinctively turn away from: Defense, its mechanisms, its realities. An opaque, sometimes uncomfortable world, but one more urgent to understand than ever.
Because tomorrow’s threat may not yet have a name, but it already exists. And while we look away, entire teams are coding, testing, iterating, sometimes just a few kilometers from the front line, so that Europe can defend itself in a world that did not wait for us to become hostile.
We thank Martin, Hadrien, and Arnaud for opening the doors of their offices to us. And we thank them even more now that we fully grasp the fight they carry every day, to protect the sovereignty of peoples. A fight that, ultimately, concerns us all.
If you enjoyed this edition, please consider sharing it with a colleague or friend who wants to understand how geopolitics and tech intertwine.
Did a friend forward this to you? Subscribe and get Mind The Gap directly in your inbox.
Best,
Agathe & Emma