CLA | Ch. 4 — From Tool to Normative Agent
The question is no longer whether machines can think. It is whether machines that make decisions with legal consequences can continue to be treated as simple objects.
Signal latency between Earth and Mars ranges from 4 to 24 minutes. Within that interval, an AI system may decide the fate of life support for 120 people. There is no time to consult anyone. There is no human to hand control back to. The system decides.
Is that decision the act of a tool? Of a person? Of neither?
This episode argues that the traditional dichotomy — persons versus things — is insufficient for twenty-first-century law. Space AI systems are a third category: algorithmic normative agents.
They are not persons: they have no moral conscience or intrinsic dignity. They are not tools: they do not merely execute deterministic instructions. They are limited centers of normative imputation — entities with autonomous decision-making capacity, specific responsibilities, and constitutive restrictions that no calculation can transgress.
Five conditions define them: they make autonomous decisions within defined domains, they operate under normative restrictions coded into their architecture, they generate legal consequences, they are auditable, and they admit human override.
The law has already built analogous categories: corporate personhood for entities without a mind, in rem actions in maritime law, autonomous vehicle regulatory frameworks. None is sufficient for space. All point in the same direction: the law can create new categories when reality demands it.
Reality in space demands it now.
—
📙 CLA: Algorithmic Law for the Cosmos
Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia
https://a.co/d/0aGJioHm
🌐 https://edo-os.com
🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795