In 2011, I got into machine learning through evolutionary algorithms. I was fascinated by the idea that we could simulate evolution to design better neural networks. I spent evenings experimenting with meta-heuristics, simple perceptrons, and evolutionary search applied to games and image compression. Most of it didn’t work well, but the concept stayed with me: learning systems could evolve their own structure.
Years later, from 2017 to 2022, I worked on Neural Architecture Search (NAS) and AutoML, following the line of work opened by Quoc Le and Barrett Zoph and later extended with evolutionary methods such as regularized evolution. Those principles were eventually productionized at scale, with architecture search and automated model optimization reaching real-world applications. Looking back, that period was about one idea: turning design itself into an optimization problem.
While exploring applications of ML outside traditional tech, I came across Brink Therapeutics, a biotech company using directed evolution and machine learning to engineer enzymes for precise, in-vivo DNA edits. I met their team in Paris. They walked me through their workflow, and I immediately recognized patterns that felt familiar: generation, evaluation, feedback — the same structure as AutoML, but in wet-lab form.
I had imagined biological experimentation as slow and manual: pipettes, plates, one measurement at a time. What they described was the opposite, a highly parallelized system:
- Candidate generation: They start by designing a large library of enzyme variants and synthesizing them together in a single batch, a population initialization step.
- Droplet partitioning: The mixed solution is divided into millions of microdroplets. Each droplet, with high probability, contains just one enzyme variant and a DNA segment to edit. Every droplet becomes a tiny, isolated experiment.
- Batch selection: After the reactions occur, the system applies a unified selection pressure across the entire population, evaluating millions of variants collectively within a single screening cycle.
That last step caught my attention. They weren’t testing one droplet at a time. They could apply a single selection pass over the entire pooled experiment — the biological equivalent of computing a global loss across a minibatch, evaluating millions of experiments simultaneously and updating based on the aggregate signal. It was the first time I had seen batching implemented in matter instead of code.
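For readers who think in code, here is a minimal, purely illustrative sketch of that batching idea. It assumes nothing about Brink's actual system: just a toy evolutionary loop in Python where every variant in the pool is scored in one vectorized pass and selection acts on the aggregate result. The sequences, the fitness function, and every number are invented placeholders.

```python
import numpy as np

# Toy stand-ins: "variants" are integer sequences; "activity" is similarity
# to a hidden target. All sizes and rates below are illustrative only.
rng = np.random.default_rng(0)
ALPHABET, SEQ_LEN = 20, 50
TARGET = rng.integers(ALPHABET, size=SEQ_LEN)

def batch_fitness(pool):
    # One vectorized pass over the whole pool: the in-silico analogue of a
    # single selection step applied to millions of droplets at once.
    return (pool == TARGET).mean(axis=1)

def mutate(parents, n_children, rate=0.02):
    # Rebuild the library from the survivors with random point mutations.
    children = parents[rng.integers(len(parents), size=n_children)].copy()
    mask = rng.random(children.shape) < rate
    children[mask] = rng.integers(ALPHABET, size=int(mask.sum()))
    return children

pool = rng.integers(ALPHABET, size=(10_000, SEQ_LEN))    # "library synthesis"
for generation in range(20):
    fitness = batch_fitness(pool)                        # batched screening
    survivors = pool[np.argsort(fitness)[-1_000:]]       # one selection pressure
    pool = mutate(survivors, n_children=10_000)          # next-round library
    print(f"generation {generation}: best activity {fitness.max():.2f}")
```

The point is the shape of the loop, not the biology: generate a pool, evaluate it in one pass, select on the aggregate signal, repeat.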
The rest of their pipeline echoed the logic I’d seen in AI: guided search replacing random mutation, classifiers predicting properties before wet-lab testing, automation and hardware driving down cost, and models generalizing across protein families, much like transfer learning in vision or language models. Different substrate, same pattern: close the loop between generation, evaluation, and learning.
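The "classifier before the wet lab" step maps just as directly. As a hedged sketch (again, not their actual stack), the pattern is: fit a surrogate model on variants that have already been measured, then use its predictions to decide which candidates are worth synthesizing and screening next. Every dataset, feature, and threshold here is made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Pretend feature vectors for variants already measured in the lab, with a
# fake activity signal driven by a few of the features.
measured_X = rng.random((500, 32))
measured_y = measured_X[:, :4].sum(axis=1) + 0.1 * rng.standard_normal(500)

# Surrogate model: predicts activity from sequence features.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(measured_X, measured_y)

# A large proposed library that has not been synthesized yet.
candidate_X = rng.random((100_000, 32))
predicted = surrogate.predict(candidate_X)

# Only the predicted top slice goes on to the expensive physical screen.
to_screen = candidate_X[np.argsort(predicted)[-1_000:]]
print(f"screening {len(to_screen)} of {len(candidate_X)} proposed variants")
```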
I’m not a biologist, and I don’t pretend to be. What stayed with me was the symmetry. Brink is using evolution to design enzymes that can, in turn, edit DNA — methods that refine the very substrate they act on. For someone who started out evolving neural networks, watching evolution applied back to biology felt like a quiet completion of an idea. Two fields, years apart, converging on the same optimization logic. That recognition, more than the science itself, is what convinced me to back the team.
References
- Zoph, B., & Le, Q. V. (2016). Neural Architecture Search with Reinforcement Learning.
- Real, E., et al. (2018). Regularized Evolution for Image Classifier Architecture Search.
- Yang, K. K., Wu, Z., & Arnold, F. H. (2019). Machine-learning-guided directed evolution for protein engineering.