Your Showcase Primer: Deveillance and Radical Numerics
Safeguards for a dystopian future
🌟 Engineers — want to meet these two founders and eight others? This is your last chance to apply to attend our March 25th SF Startup Showcase!
The next wave of AI will bring extraordinary capabilities — and some truly unsettling ones. Systems that can design biology. Devices that can record and analyze conversations everywhere we go. These capabilities are arriving faster than most people expected, and they raise uncomfortable questions about privacy, safety, and control. But alongside those technologies, a new group of engineers is starting to build something just as important: the defenses.
Aida Baradari, founder of Deveillance
The year is 2028, and you’re sitting at a cafe for an in-person meeting. The person across from you is wearing smart glasses that look completely normal. After the conversation ends, you learn that everything you said was recorded, transcribed, and stored by an AI assistant running quietly in the background. You never agreed to it. You didn’t even know it was happening.
That world isn’t science fiction – it’s already emerging. New wearable AI devices promise to capture and summarize entire conversations automatically. As speech recognition and large language models improve, your live conversations are turning into searchable, permanent data. What used to disappear at the end of a conversation can now live forever in someone’s cloud account.
That shift is exactly what Aida Baradari, the founder of Deveillance, is pushing back against. The company is building a new category of counter-surveillance technology designed to give people control over whether their conversations become data. Its first device, Spectre I, detects nearby microphones and emits targeted cancellation signals that scramble what those microphones hear. To humans in the room, the conversation sounds completely normal. To recording devices, the speech becomes unintelligible.
The challenge is building a system that can detect microphones, model how sound propagates through the environment, and generate interference signals precise enough to disrupt recording devices without disturbing the people actually speaking. It’s a problem that sits at the intersection of acoustics, signal processing, hardware design, and machine learning.
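The article doesn’t describe how Spectre I actually works, but one well-studied counter-surveillance technique gives a feel for the physics: ultrasonic jamming, which exploits the slight nonlinearity of MEMS microphones to demodulate an inaudible carrier into broadband noise that lands on the recording. Here’s a minimal NumPy sketch of that effect – every parameter is illustrative, and this is not Deveillance’s method:

```python
import numpy as np

fs = 96_000                        # sample rate high enough to represent ultrasound
t = np.arange(fs) / fs             # one second of signal
rng = np.random.default_rng(0)

# Band-limited random signal (below 4 kHz) used to modulate the carrier.
spec = np.fft.rfft(rng.standard_normal(fs))
freqs = np.fft.rfftfreq(fs, 1 / fs)
spec[freqs > 4_000] = 0
noise = np.fft.irfft(spec, n=fs)
noise /= np.abs(noise).max()

# 25 kHz carrier: inaudible to people, but well within a microphone's range.
carrier = np.sin(2 * np.pi * 25_000 * t)
jam = (1 + 0.5 * noise) * carrier  # amplitude-modulated ultrasonic jamming signal

# MEMS microphones are slightly nonlinear; model that as x + 0.1*x^2.
# The quadratic term demodulates the ultrasound down into the speech band.
recorded = jam + 0.1 * jam ** 2

def speech_band_energy(x):
    """Signal energy between 100 Hz and 8 kHz (roughly the speech band)."""
    p = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return p[(f > 100) & (f < 8_000)].sum()

# The jamming signal itself carries essentially no audible-band energy,
# but the nonlinear "recording" does: broadband noise masks any speech.
print(speech_band_energy(jam) < 1e-6 * speech_band_energy(recorded))
```

The real problem is far harder than this toy, of course: a product has to localize unknown microphones, adapt to room acoustics, and keep the interference below levels that annoy humans or pets.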
As AI systems get dramatically better at listening to and analyzing the world around us, technologies like this raise a deeper question. If every device can record everything, who gets to decide when it doesn’t?
Eric Nguyen, founder of Radical Numerics
A few years ago, a group of researchers at Stanford started asking a provocative question: what if the next breakthrough in biology didn’t come from a lab experiment, but from an AI model? That curiosity led Michael Poli, Eric Nguyen, and their collaborators to build Evo, one of the first large generative models trained directly on genomic data. Evo doesn’t just predict or analyze DNA – it can generate entirely new genetic sequences, designing proteins and genomes that have never existed before and launching us into a future where AI systems help invent biological tools the same way generative models create images or code.
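The analogy to text generation is quite literal: genomic language models treat DNA as a sequence over a four-letter alphabet and sample it one base at a time, conditioned on what came before. A toy sketch of that sampling loop – the transition table here is a made-up stand-in, nothing like Evo’s learned model:

```python
import numpy as np

rng = np.random.default_rng(0)
BASES = list("ACGT")

def next_base_probs(context: str) -> np.ndarray:
    """Toy stand-in for a trained model: P(next base | context).
    A real model like Evo learns these probabilities from genomic data;
    this hypothetical table only looks at the last base."""
    table = {
        "A": [0.10, 0.30, 0.40, 0.20],
        "C": [0.30, 0.20, 0.20, 0.30],
        "G": [0.25, 0.25, 0.25, 0.25],
        "T": [0.40, 0.20, 0.20, 0.20],
    }
    return np.array(table[context[-1]])

def generate(length: int, seed: str = "A") -> str:
    """Autoregressively sample `length` new bases after the seed."""
    seq = seed
    for _ in range(length):
        seq += rng.choice(BASES, p=next_base_probs(seq))
    return seq

dna = generate(60)
print(dna)
```

Scaling this idea up – to models with long-range context over entire genomes – is what makes training runs like Evo’s a serious systems problem.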
If AI can design biology, the upside is enormous. It could accelerate drug discovery, create microbes that break down pollution, or develop new treatments for disease. But it also raises a scarier question: what happens when the ability to design biological systems becomes widely accessible? The same technology can be used to design dangerous bioweapons.
Imagine a hospital lab in NYC detecting an unusual infection. The symptoms look like the flu, but the genetic sequence doesn’t match anything in existing databases. Normally it could take weeks for researchers to understand what they’re dealing with. But if the pathogen were engineered with AI, those weeks might be the difference between a contained incident and a global outbreak.
Now imagine having an AI system that can rapidly simulate how that organism behaves—how its proteins fold, how it interacts with human cells, how quickly it spreads. Within hours it can suggest possible countermeasures, predict mutations, and identify drugs that might neutralize it.
That kind of rapid simulation is one of the motivations behind the work at Radical Numerics. If our adversaries can use generative AI to design biology, we need equally powerful systems capable of understanding and defending against it.
Training models like Evo requires pushing the limits of compute, designing new kernels and architectures, and building systems that can simulate biological processes at enormous scale. It’s the kind of problem where advances in algorithms, infrastructure, and hardware all matter at once. If you’re curious about how AI might move beyond image and text generation to model and simulate biology, this is a rare opportunity to hear from the researchers helping define that frontier.
The technologies that Deveillance and Radical Numerics are building may seem like something out of a Black Mirror episode. AI systems that can design biological molecules. Devices that can silently record every conversation around us. But the tools themselves are neither good nor bad, and the future isn’t predetermined. These powerful technologies can lead to surveillance and risk, or to safety and empowerment. The founders of Deveillance and Radical Numerics are working on very different problems, but they share the same instinct: if the future looks a little scary, the best response is to start building the systems that make it safer.
Apply to meet Aida Baradari, the founder of Deveillance, and Eric Nguyen, the founder of Radical Numerics, at our next showcase on March 25th in SF!