• CLA | Ch. 4 — From Tool to Normative Agent
    Apr 16 2026

    The question is no longer whether machines can think. It is whether machines that make decisions with legal consequences can continue to be treated as simple objects.

    Between Earth and Mars there are between 4 and 24 minutes of signal latency. Within that interval, an AI system may decide the fate of 120 people's life support. There is no time to consult anyone. There is no human to hand control back to. The system decides.

    Is that decision the act of a tool? Of a person? Of neither?

    This episode argues that the traditional dichotomy — persons versus things — is insufficient for twenty-first-century law. Space AI systems are a third category: algorithmic normative agents.

    They are not persons: they have no moral conscience or intrinsic dignity. They are not tools: they do not execute deterministic instructions. They are limited centers of normative imputation — entities with autonomous decision-making capacity, specific responsibilities, and constitutive restrictions that no calculation can transgress.

    Five conditions define them: they make autonomous decisions within defined domains, they operate under normative restrictions coded into their architecture, they generate legal consequences, they are auditable, and they admit human override.

    The law has already built analogous categories: corporate personhood for entities without a mind, in rem actions in maritime law, autonomous vehicle regulatory frameworks. None is sufficient for space. All point in the same direction: the law can create new categories when reality demands it.

    Reality in space demands it now.

    📙 CLA: Algorithmic Law for the Cosmos
    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia
    https://a.co/d/0aGJioHm
    🌐 https://edo-os.com
    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 mins
  • CLA | Ch. 3 — The Founding Charter of the Escuela del Deber-Optimizar
    Apr 15 2026

    Technology is not neutral: it amplifies what we are. If we are just, it will amplify justice. If we are tyrants, it will amplify tyranny. Institutional design determines what gets amplified.

    This episode presents the foundational principles of the Algorithmic Common Law — the philosophical architecture that makes law possible in the cosmos.

    1. Anthropological Amplification: technology neither determines nor is neutral. It is an amplifier. The decisive question is not what technology does, but what it finds when it arrives. Space institutions must be designed to amplify the best of the human condition, not the worst.
    2. The Duty-to-Optimize / Validity by Critical Efficiency (VEC): the Ought-to-Be asks what the norm prescribes. The Duty-to-Optimize asks what works within the limits that dignity imposes. In environments where errors are irreversible, a norm that no one can verify is not a norm — it is a declaration.
    3. Sovereignty of Evidence: legitimacy does not derive from formal authority but from demonstrable evidence of results. Ends are political; means are empirical. Whoever controls the data can control the evidence — that is why IURUS exists.
    4. Algorithmic Dignity: there are thresholds that no optimization can transgress. A system that maximizes efficiency at the cost of human dignity is not efficient. It is broken.

    And the purpose that orients everything: Flourishing. Not mere survival. The expansion of capabilities to live a genuinely human life — even 300 million kilometers from home.

    📙 CLA: Algorithmic Law for the Cosmos
    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia
    https://a.co/d/0aGJioHm
    🌐 https://edo-os.com
    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 mins
  • CLA | Ch. 2 — Classical Legal Architecture Against the Cosmic Void
    Apr 8 2026

    Kelsen presupposed territory. Hart presupposed community. Dworkin presupposed time. Luhmann presupposed closure. Space eliminates all four.

    Chapter 2 of CLA examines the four dominant theoretical architectures of twentieth-century law — Kelsen's normativism, Hart's analytical positivism, Dworkin's interpretivism, and Luhmann's systems theory — and demonstrates that they do not face correctable flaws, but structural obsolescence.

    The distinction is crucial: a correctable flaw can be resolved without altering the foundations of the theory. Structural obsolescence occurs when the failure lies in the conditions of possibility of the theory itself. It is not a building with cracks: it is a building constructed on ground that has disappeared.

    The chapter incorporates the diagnosis of the IISL Working Group on Legal Aspects of AI in Space (Yazici et al., 2024) — a 267-page report published in December 2024 that concludes that existing legal frameworks are insufficient to govern autonomous systems in space environments.

    Only by identifying precisely where and why existing theories collapse can we build alternatives that avoid reproducing their limitations.

    📖 CLA: Algorithmic Law for the Cosmos — Volume I

    Jesús Bernal Allende | School of Duty-to-Optimize and Sovereignty of Evidence

    https://a.co/d/0aqO3T6K

    🌐 https://edo-os.com

    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 mins
  • CLA | Ch. 1 — Space as a Rupture of the Legal Paradigm
    Apr 6 2026

    Westphalia was an engineering solution, not an eternal truth. Space is the environment where that engineering stops working.

    Chapter 1 of CLA: Algorithmic Law for the Cosmos argues that outer space is not simply a new domain for existing law — it is the catalyst for a paradigmatic crisis that reveals the structural limits of the modern legal system.

    The chapter introduces the concept of territorial proxy obsolescence: territory was always a technology of control, not an essence. A technology that proved optimal for three centuries under specific conditions — limited human mobility, geographically fixed resources, predominantly physical wealth. In space, that technology becomes entirely obsolete.

    The episode examines three scenarios of state transfiguration — the Algorithmic Protectorate, the Infrastructure Federation, and Distributed Functional Sovereignty — and establishes the central thesis: the transition of humanity toward a multiplanetary species requires not adapting terrestrial law, but transfiguring it.

    📖 CLA: Algorithmic Law for the Cosmos — Volume I

    Jesús Bernal Allende | School of Duty-to-Optimize and Sovereignty of Evidence

    https://a.co/d/0aqO3T6K

    🌐 https://edo-os.com

    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 mins
  • CLA: Algorithmic Law for the Cosmos | The Void No Treaty Can Fill |
    Mar 31 2026

    In September 2022, Elon Musk unilaterally decided not to activate Starlink over Crimea. No tribunal. No appeal. A private individual exercised veto power over a sovereign state's military operation — and the world had to accept it.

    That decision was not illegal. The problem is that no rule existed to prohibit it.

    In this episode, we open CLA: Algorithmic Law for the Cosmos — the book that diagnoses the silent collapse of a legal paradigm designed for a world that no longer exists, and proposes an alternative: the Algorithmic Common Law.

    We walk through the Prologue, Preface, and General Introduction of Volume I:

    • Why the 1967 Outer Space Treaty is structurally incapable of governing corporations that control satellite constellations
    • Why space law's problems are not future problems — they already exist, and the missions that will make them critical are in development
    • What the Algorithmic Common Law (CLA) is, and why it is neither law made by machines nor science fiction
    • The transition from Duty-to-Be to Duty-to-Optimize: how verifiable efficiency becomes a condition of normative validity in environments where inefficiency is lethal
    • Critical Efficiency Validity (VEC), Sovereignty of Evidence, and Algorithmic Dignity as the three pillars of the new paradigm

    • The four CLA institutions — THEA, IURUS, EVIDEN, and OACRA — and why they are not utopia but applied institutional engineering for real problems

    The future of space law is not predetermined. But neither is it open indefinitely.

    School of the Duty-to-Optimize and Sovereignty of Evidence

    📖 CLA Vol. I: https://a.co/d/0c38AaFL
    📖 CLA Vol. II: https://a.co/d/03zXi0Sv
    🌐 deber-optimizar.mx
    A production of EDO·OS

    21 mins
  • OACRA | Ch. 4 — Theoretical-Normative Framework: Foundations of Algorithmically Augmented Democracy
    Apr 16 2026

    If ambition must counteract ambition, what will counteract the algorithm?

    In September 2024, the Mexican Senate approved the most sweeping judicial reform in decades. In under fifteen days. Without technical impact analysis. Without any mechanism to make the trade-offs visible before the vote. The consequences arrived afterward, when the law was already in force.

    This episode builds the theoretical foundation that justifies OACRA, integrating four intellectual traditions:

    1. Democratic theory: OACRA operates within Sen's conception of democracy as public discussion. It is not direct digital voting. It is the augmentation of the deliberative capacities of the existing representative legislature.
    2. Institutional design: OACRA is a fourth check in the Madisonian architecture, analogous to constitutional courts for constitutionality and to independent budget offices for fiscal projections. A technical check does not replace the democratic one — it makes it possible.
    3. Institutional economics: the failures diagnosed are not personal — they are institutional. Legislators rationally respond to the incentives they face. OACRA makes legislating poorly more politically costly — not impossible.
    4. Philosophy of technology: every technical design decision is a political decision materialized in code. Infrastructural bias is not encoded in the algorithm — it is encoded in the infrastructure that supports it.

    And one mathematical limit that grounds everything: Chouldechova (2017) proved that, when base rates differ across groups, the standard fairness criteria cannot all be satisfied at once — perfect algorithmic fairness is a logical impossibility. That is why OACRA requires a Parliament of Models. Not a single model.
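    The impossibility can be sketched numerically. A minimal illustration using the relation from Chouldechova (2017), with purely hypothetical values — the prevalences, PPV, and FNR below are assumptions for illustration, not figures from the episode:

```python
# Sketch of Chouldechova's (2017) impossibility result.
# The paper derives the identity:
#   FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR),
# where p is the group's base rate (prevalence). If two groups share
# the same PPV (predictive parity) and the same FNR, but have different
# base rates, their FPRs must differ — error-rate balance fails.

def fpr(prevalence, ppv, fnr):
    """False-positive rate implied by prevalence, PPV, and FNR."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Hypothetical groups: equal PPV and FNR, unequal base rates.
fpr_a = fpr(prevalence=0.3, ppv=0.8, fnr=0.2)
fpr_b = fpr(prevalence=0.1, ppv=0.8, fnr=0.2)

print(round(fpr_a, 4), round(fpr_b, 4))  # prints: 0.0857 0.0222
```

    No single model can escape this trade-off, which is the formal motivation for aggregating several models rather than trusting one.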

    📘 OACRA — Algorithmic Office for Enhanced Regulatory Quality
    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia
    https://a.co/d/09Xzy0z8
    🌐 https://edo-os.com
    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    22 mins
  • OACRA | Ch. 3 — Lessons from the World: International Experiences in Institutional Innovation with AI
    Apr 15 2026

    Copying a model is the fastest way to import its flaws. Extracting principles is the slowest way to build something that works.

    This episode examines five international experiences — three successful with limitations, two failed — to extract what Latin America can and cannot replicate.

    1. Estonia (X-Road): over 2.7 billion annual queries with immutable logging. Radical transparency that builds trust where none existed. Non-replicable condition: 25 years of sustained investment, a population of 1.3 million, and cross-party political consensus.
    2. Taiwan (vTaiwan): consensus among antagonistic stakeholders through mass digital deliberation. 80% of its processes led to government action. Negative lesson: voluntariness kills institutional innovation. It declined in 2018 without a formal mandate.
    3. European Parliament: AI legislative tools with limited impact due to institutional resistance. Technology without cultural change produces underused tools.
    4. Chile (Government Laboratory): a network of 27,000 specialists in public innovation. It accelerated technological adoption but without direct citizen participation and with high dependence on the political cycle.
    5. Kenya (Huduma Namba) and India (Aadhaar): the absence of safeguards generates massive exclusion and irreversible harm. The bias is not in the algorithm — it is in the infrastructure that supports it.

    The pattern is unequivocal: success does not depend on technical sophistication. It depends on institutional safeguards proportional to the risks of capture.

    📘 OACRA — Algorithmic Office for Enhanced Regulatory Quality
    Jesús Bernal Allende | Escuela del Deber-Optimizar y la Soberanía de la Evidencia
    https://a.co/d/09Xzy0z8
    🌐 https://edo-os.com
    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    24 mins
  • OACRA | Ch. 2 — Five Structural Failures in Latin American Legislative Governance
    Apr 9 2026

    The talent exists. The data exists. The warnings exist. What does not exist is an institutional architecture connecting them to the legislative decision.

    This episode diagnoses five structural governance failures in Latin America that justify intervention through AI-augmented institutional design:

    1. Perverse incentive systems: short-term electoral rationality rewards policies with visible benefits before the next election, even when they generate unsustainable deferred costs.

    2. Technical capacity asymmetry: while the U.S. Congressional Budget Office operates with 275 analysts and over $70M annually, Latin American legislative technical offices operate on a fraction of that.

    3. Structural opacity: legislative modifications without individual accountability traceability — the 2024 Mexican judicial reform as a devastating illustration.

    4. Informational fragmentation: AI initiatives in Mexico that ignored simultaneous experiences in Brazil and Chile.

    5. Absence of continuous update mechanisms: accumulation of obsolete legislation.

    The diagnosis avoids two traps: external victimization and negative exceptionalism. The failures are of institutional design — and therefore correctable.

    📖 OACRA — Algorithmic Office for Enhanced Regulatory Quality

    Jesús Bernal Allende | School of Duty-to-Optimize and Sovereignty of Evidence

    https://a.co/d/09Xzyoz8

    🌐 https://edo-os.com

    🔗 https://www.linkedin.com/in/jesus-bernal-allende-030b2795

    23 mins