
Explorations

Occasional reflections on technology and curiosity. Not finished work, just fragments of how I’m thinking about things. Views are my own.

Coming full circle.

From Evolving Neural Nets to Evolving DNA

In 2011, I got into machine learning through evolutionary algorithms. I was fascinated by the idea that we could simulate evolution to design better neural networks. I spent evenings experimenting with meta-heuristics, simple perceptrons, and evolutionary search applied to games and image compression. Most of it didn’t work well, but the concept stayed with me: learning systems could evolve their own structure.
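
To make that concrete, here is a minimal sketch of the kind of loop I was playing with, reconstructed from memory rather than from my actual code: a simple (mu + lambda) evolution strategy that mutates the weights of a tiny network and keeps whatever fits XOR best.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def forward(params, x):
    """Tiny 2-2-1 network: tanh hidden layer, linear output."""
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def fitness(params):
    """Negative mean squared error, so bigger is better."""
    return -np.mean((forward(params, X).ravel() - y) ** 2)

def random_params():
    return [rng.normal(0, 1, (2, 2)), rng.normal(0, 1, 2),
            rng.normal(0, 1, (2, 1)), rng.normal(0, 1, 1)]

def mutate(params, sigma=0.2):
    """Gaussian perturbation of every weight: the only learning signal."""
    return [p + rng.normal(0, sigma, p.shape) for p in params]

# (mu + lambda) selection: keep the 10 fittest, refill with mutants.
population = [random_params() for _ in range(50)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [
        mutate(population[rng.integers(10)]) for _ in range(40)]

best = max(population, key=fitness)
print(forward(best, X).ravel().round(2))  # should approach [0, 1, 1, 0]
```

No gradients anywhere; selection pressure alone pushes the population toward a working network.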

Years later, from 2017 to 2022, I worked on Neural Architecture Search (NAS) and AutoML, following the line of work opened by Quoc Le and Barrett Zoph and later extended with evolutionary methods such as regularized evolution (Real et al., 2018). Those principles were eventually productionized at scale, with architecture search and automated model optimization reaching real-world applications. Looking back, that period was about one idea: turning design itself into an optimization problem.
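
That idea fits in remarkably little code. Below is a hedged sketch of the aging-evolution loop from the regularized-evolution paper, with a dummy scoring function standing in for the expensive part, which in real NAS is training each candidate architecture:

```python
import random
from collections import deque

OPS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]  # toy search space

def evaluate(arch):
    """Stand-in for the expensive part; real NAS trains the candidate
    and returns its validation accuracy."""
    return hash(tuple(arch)) % 1000 / 1000.0  # arbitrary but consistent score

def mutate(arch):
    """Change one op: the simplest possible mutation."""
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

random.seed(0)
population = deque(maxlen=50)   # fixed size: appending evicts the oldest
for _ in range(50):
    arch = [random.choice(OPS) for _ in range(6)]
    population.append((arch, evaluate(arch)))

for _ in range(1000):           # evolution cycles
    tournament = random.sample(list(population), 10)
    parent = max(tournament, key=lambda t: t[1])[0]
    child = mutate(parent)
    population.append((child, evaluate(child)))  # oldest dies, not the worst

print(max(population, key=lambda t: t[1]))
```

The trick that distinguishes it from plain tournament evolution sits in the deque: removing the oldest individual rather than the weakest is what keeps the population regularized.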

While exploring applications of ML outside traditional tech, I came across Brink Therapeutics, a biotech company using directed evolution and machine learning to engineer enzymes for precise, in-vivo DNA edits. I met their team in Paris. They walked me through their workflow, and I immediately recognized patterns that felt familiar: generation, evaluation, feedback — the same structure as AutoML, but in wet-lab form.

I had imagined biological experimentation as slow and manual: pipettes, plates, and one measurement at a time. What they described was the opposite: a highly parallelized system.

  1. Candidate generation. They start by designing a large library of enzyme variants and synthesizing them together in a single batch, a population-initialization step.
  2. Droplet partitioning. The mixed solution is divided into millions of microdroplets. Each droplet, with high probability, contains just one enzyme variant and a DNA segment to edit. Every droplet becomes a tiny, isolated experiment.
  3. Batch selection. After the reactions occur, the system applies a unified selection pressure across the entire population, evaluating millions of variants collectively within a single screening cycle.

That last step caught my attention. They weren’t testing one droplet at a time. They could apply a single selection pass over the entire pooled experiment — the biological equivalent of computing a global loss across a minibatch, evaluating millions of experiments simultaneously and updating based on the aggregate signal. It was the first time I had seen batching implemented in matter instead of code.
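
To show what that analogy means in the vocabulary I come from, here is a sketch in NumPy terms. Every name in it is my own stand-in, not their pipeline; the real readout comes from sequencing the pooled droplets, not from a scoring function.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = rng.normal(size=32)   # hidden "ground truth", for illustration

# 1. Candidate generation: one synthesized batch of variants, scaled down
#    here to 100k rows of a feature matrix.
library = rng.normal(size=(100_000, 32))

# 2. Droplet partitioning + reaction: each row is its own tiny experiment;
#    the added noise stands in for per-droplet variability.
readout = library @ true_signal + rng.normal(scale=0.5, size=len(library))

# 3. Batch selection: one global ranking over the whole pool, the wet-lab
#    analogue of a loss computed across a single huge minibatch.
survivors = library[np.argsort(readout)[-1_000:]]   # top 1% advance
print(survivors.shape)                              # (1000, 32)
```

The point is the absence of an inner loop: there is no `for droplet in droplets`, only one evaluation and one selection pass.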

The rest of their pipeline echoed the logic I’d seen in AI: guided search replacing random mutation, classifiers predicting properties before wet-lab testing, automation and hardware driving down cost, and models generalizing across protein families, much like transfer learning in vision or language models. Different substrate, same pattern: close the loop between generation, evaluation, and learning.
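
Here is a hedged sketch of what such a guided loop can look like, in the spirit of Yang, Wu, and Arnold (2019) rather than Brink's actual pipeline; the surrogate model, the assay function, and every parameter below are illustrative stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
L, A = 20, 4                          # sequence length, alphabet size
secret = rng.normal(size=(L, A))      # hidden "true" fitness landscape

def assay(seqs):
    """Stand-in for the wet-lab measurement of a small batch."""
    return secret[np.arange(L), seqs].sum(axis=1)

def mutate(seqs):
    """Propose children by random point mutations of measured parents."""
    children = seqs.copy()
    pos = rng.integers(L, size=len(seqs))
    children[np.arange(len(seqs)), pos] = rng.integers(A, size=len(seqs))
    return children

seqs = rng.integers(A, size=(96, L))  # initial measured library
fitness = assay(seqs)
for round_ in range(5):
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(seqs, fitness)      # learn from everything measured so far
    candidates = mutate(seqs[np.argsort(fitness)[-24:]].repeat(40, axis=0))
    predicted = surrogate.predict(candidates)
    batch = candidates[np.argsort(predicted)[-96:]]   # in-silico pre-screen
    seqs = np.vstack([seqs, batch])
    fitness = np.concatenate([fitness, assay(batch)])  # lab measures survivors

print(fitness.max())
```

The structure is the point: a cheap in-silico screen throttles what reaches the expensive oracle, exactly the role proxy models played in AutoML.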

I’m not a biologist, and I don’t pretend to be. What stayed with me was the symmetry. Brink is using evolution to design enzymes that can, in turn, edit DNA — methods that refine the very substrate they act on. For someone who started out evolving neural networks, watching evolution applied back to biology felt like a quiet completion of an idea. Two fields, years apart, converging on the same optimization logic. That recognition, more than the science itself, is what convinced me to back the team.

References

  • Zoph, B., & Le, Q. V. (2016). Neural Architecture Search with Reinforcement Learning.
  • Real, E., Aggarwal, A., Huang, Y., & Le, Q. V. (2018). Regularized Evolution for Image Classifier Architecture Search.
  • Yang, K. K., Wu, Z., & Arnold, F. H. (2019). Machine-learning-guided directed evolution for protein engineering.

Early lessons.

From curiosity to conviction

Back in my student days, I wrote a short blog post about Theano, one of the early deep learning frameworks. A few days later, Rand Hindi, a Parisian entrepreneur, messaged me. He said he had some geospatial datasets and was gathering tinkerers for a weekend project.

I half-joked that if it was a disguised recruiting attempt, it was a lost cause: I’d already signed an offer.

He laughed and said he just wanted to have fun.

That weekend, I found myself surrounded by scientists from France’s top universities. The depth of discussion was miles ahead of anything I’d seen at EPITA, and I quickly realized I was learning faster there than in any lecture hall. Projects ranged from turning a phone’s barometer into a sensor for tracking a user’s position underground without GPS, to ray-casting sunlight over 3D OpenStreetMap data to predict which Parisian terraces would stay sunny throughout the day.

A few months later, when I told my director of studies I’d landed an internship at Google, his only concern was that I’d come back afterward. It was the first time I realized not coming back was even an option.

That moment reshaped everything. I decided to double down and surround myself with people who made me learn faster. The mathematicians and data scientists I met through Rand’s startup were magnetic, and I started hanging out at their office, lurking for knowledge. They accelerated my understanding of data science and machine learning far beyond the curriculum.

Over the years, I kept following Rand’s thinking on privacy technologies. At Snips, he was building a private-by-design voice assistant, conceptually parallel to what Siri and Alexa were doing at the time, except his edge was privacy. Snips was later acquired by Sonos, validating that approach.

When he told me he was starting a new venture, Zama, focused on privacy-preserving computation, I was immediately drawn in. His vision was to turn zero-knowledge proofs, multi-party computation, and homomorphic encryption into a protocol enabling secure collaborative training of machine learning models across institutions, as well as inference over sensitive data.
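
Homomorphic encryption was the piece that felt most like magic, and its core property fits in a few lines. Below is a toy, utterly insecure Paillier-style scheme (tiny primes, additions only), purely to illustrate the idea of computing on data you cannot read; Zama’s FHE stack is a different and far more general construction.

```python
from math import gcd

# Toy Paillier cryptosystem with insecure parameters, for illustration only.
p, q = 293, 433                 # toy primes; real keys use ~2048-bit primes
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m, r):
    """E(m) = g^m * r^n mod n^2, with r coprime to n (random in practice)."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """D(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 42, 99
ca, cb = encrypt(a, 4242), encrypt(b, 9999)
# The homomorphic property: multiplying ciphertexts adds the plaintexts.
assert decrypt(ca * cb % n2) == a + b
print(decrypt(ca * cb % n2))    # -> 141, computed without seeing 42 or 99
```

Real FHE schemes also support multiplication, which is what makes arbitrary computation, and therefore encrypted ML inference, possible.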

I told him right away: I want a seat. That became my first angel check, and I accidentally invested in a unicorn.

I didn't have a framework yet, but in retrospect, Zama checked every box of what I'd later formalize. Since then, I’ve been searching for the signals I could extract, a compass of sorts:

  • Technically ambitious vision. Projects that sound impossible but rest on solid foundations. Founders who walk a fine line: humble enough to respect the science, but arrogant enough to believe they can solve what others couldn’t.
  • Validated through concentration of talent. Founders who attract exceptional technical profiles, creating both validation and grounding for their ambitious ideas.
  • With market inevitability. Domains where impact is obvious and the main risk is technical, not commercial. Zama’s path shifted from AI cloud to blockchain, proving that truly fundamental problems create their own markets.

Curiosity has been the common thread. It pulled me into that first weekend project, into conversations that accelerated my learning, and later into backing founders exploring the edges of what’s possible.