Context: I co-founded a philosophy and neuroscience research institute and designed the high-level logic for several neurotech devices.
An underappreciated aspect of neurotech is that we lack a strong “ownable computing” model, particularly for implanted systems. By “ownable computing” I mean two things:
Owning a system requires owning the private keys and source code
The cryptographic perspective is that you can own a system if, and only if, you control the private keys that give ultimate ‘root access’ to its hardware and software. Cryptocurrency enthusiasts like to say “not your keys, not your coins” — meaning you don’t truly own a Bitcoin unless you control the private keys that can spend it. In a similar vein, the Free Software movement believes you only truly own software you have the source code for, because ownership implies the ability to take something apart and put it back together differently. This definition of ownership raises some interesting questions: if you have a smart door lock from Nest (Google) or Ring (Amazon), and they can lock you out by sending a software update, who really owns your house?
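To make “controlling the keys” concrete, here is a toy sketch of the underlying mechanism: whoever holds the private key can produce signatures (e.g. on a firmware update) that anyone can verify against the public key, and nobody else can. This uses a Lamport one-time signature because it needs only a hash function from the standard library; it is an illustration, not production cryptography, and the firmware-update framing is my example, not something a real device vendor ships.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    """SHA-256 hash."""
    return hashlib.sha256(data).digest()

def keygen():
    # Private key: 256 pairs of random 32-byte secrets,
    # one pair per bit of the message hash.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the hash of every secret. Safe to publish.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _hash_bits(msg: bytes):
    # The 256 bits of the message hash, most significant bit first.
    h = H(msg)
    return [(h[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # For each bit of the hash, reveal one secret of the pair.
    # Only the private-key holder can do this. (One-time use only:
    # each signature leaks half the secrets.)
    return [sk[i][bit] for i, bit in enumerate(_hash_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    # Anyone with the public key can check that each revealed
    # secret hashes to the published commitment for that bit.
    return all(H(sig[i]) == pk[i][bit]
               for i, bit in enumerate(_hash_bits(msg)))
```

A device that accepts only updates verifying against a public key baked into its hardware is, in this cryptographic sense, owned by whoever holds the matching private key — which is exactly why it matters whether that key sits with the user or with the vendor.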
Owning a system requires understanding it
The second requirement for owning a system is that it must be simple enough to be ownable. Linux is distributed under the GPLv2 open-source license, and if you don’t like something you’re free to look at how it works and change it; in this sense it’s wonderfully ownable. But the Linux kernel is also some 30 million lines of code with countless moving parts. This is far beyond the human limit of full understanding or holistic auditing. As a result, actual ownership of the system is less concentrated in the user and more diffused across the process that built the system, any actors with secret knowledge of it, and any forces that can put pressure on those processes and actors, such as large corporations and governments.
Technological ownability matters for your home, your car, your phone. But it especially matters for your brain. I’m hugely optimistic about the promise of advanced neurotechnology, but we seem to be sleepwalking into a situation where ownable neurotech may not happen on its own. And the stakes are high enough that advanced neurotech which is not strongly ownable, and does not actively defend its users’ security and sovereignty, may essentially turn out to be slavery neurotech.
Putting energy into worrying about this is probably counterproductive, feeding bad futures. And this problem of maintaining personal sovereignty in an age of advanced neurotech is complex enough that there will never be a single solution that cuts the entire knot. But if there are technological platforms that are built around the ideals of ownability and sovereignty, we should support, develop and build on them to prepare for a better future. This is the path that led me to Urbit, a topic for another post.
This is deeply complicated by the fact that as a social species, we don’t have full sovereignty over our brains to begin with — and as the Buddhists might say, “who’s ‘we’, anyway?” A proper defense and augmentation of personal sovereignty will require new forms of understanding personal identity and social interactions.
Carl Schmitt famously suggested that “sovereign is he who decides on the exception.” The extent to which people with advanced neurotech should have fine-grained control over their own brains (and which subagents within a brain should be prioritized or empowered) is a very complex question. But I would strongly suggest that any technology that deeply interfaces with the brain should be built on a technology stack that allows the possibility of sovereignty / focused ownership.
Acknowledgements: Thank you to Neal Davis, Jōshin Steven Dee, Galen Wolfe-Pauly, Josh Lehman, and Vita Guttmann for comments.