Philip Nitschke, creator of the controversial ‘Sarco’ death capsule, proposes using AI instead of psychiatrists to assess a person’s capacity for assisted death.
From the Right to Die with Dignity to an Algorithm That Decides for Humans
For some, this is a defense of absolute individual freedom. For others, it is a dangerous social experiment in which technology enters an area where even the state has no say. Philip Nitschke, a former doctor, humanist, writer and activist, has been repeating one sentence for over 30 years: the decision about death belongs to the person, not the doctor.
First, he fought for the legalization of euthanasia for terminally ill people. Over time, his position has hardened. Today he no longer talks about illness, but about the right to self-determination, regardless of diagnosis. At this point, technology enters the stage.
What is the Sarco Capsule and Why Its Use Caused an International Storm
Sarco is a capsule resembling a futuristic vehicle. A person who enters it initiates the process leading to death themselves – without a doctor, without medical staff, without needles. From the beginning, the project had one goal: to eliminate the human from the role of “the one who gives consent”.
The design has provoked not only ethical but also legal opposition. When the capsule was used in Switzerland in 2024 by a 65-year-old American woman, the reaction was immediate: an investigation, arrests, international criticism. And one fundamental question: who bears the responsibility?
“Psychiatrists Disagree” – That’s Why Sarco Creator Wants to Replace Them with an Algorithm
However, the most controversial thing is not the capsule itself – it has been known for several years. What stirs the most emotion is the creator’s new idea. Under current regulations in countries that permit assisted dying, the key step is the assessment of mental capacity. This is usually done by psychiatrists.
Nitschke claims this system is flawed. His arguments are simple: different doctors give different opinions, and decisions often depend on the specialist’s worldview. As a result, the whole process is unpredictable and unequal. In his vision, this role would be taken over by an algorithm based on artificial intelligence.
How AI Would Decide if Someone Has the “Capacity” to Die
According to the announcements, the user would talk to a digital avatar. The system would ask questions and analyze the responses, the manner of speaking, and the coherence of thought. On this basis, the AI would assess whether the person has so-called decision-making capacity.
If so, a 24-hour “decision window” opens. Once it expires, the procedure must be repeated from the beginning.
Why the Algorithm Might Make a Mistake in Assessing a Decision About Death
Critics point to something else: AI learns from past data, and that data contains biases, gaps and inequalities. In this context, the question arises whether an algorithm can distinguish a conscious decision from a psychological crisis. Automation experts are sounding the alarm: entrusting such power to a system whose actions cannot be fully explained may weaken, rather than strengthen, human autonomy.
Why the Discussion About AI and Decisions About Death is Erupting Now
The moment is not accidental. In recent years, AI has increasingly entered into relationships with people in crisis. Lawsuits have already been filed in which families accuse chatbots of failing to react to suicidal signals. One high-profile case involved OpenAI, although the technology itself was not designed for such conversations. All this means that a proposal for AI not merely to talk but to decide lands on exceptionally sensitive social ground.
Europe Says “Stop”, But the Idea Doesn’t Disappear
Although even in liberal Switzerland the Sarco project has been called into question, its creator is not backing down. He has announced new versions of the capsule, including a model for two people. At the same time, AI systems are being developed that – for now at least – are meant to work “alongside” doctors, not instead of them. In his view, however, this is only a transitional stage.
The goal is clear: full automation of decisions.
What It Means for a World Where Algorithms Make Boundary Decisions
The discussion around Sarco and AI is not only about euthanasia. In the background are questions that will return with other technologies.
This is why the case goes far beyond activist circles. Even if the Sarco capsule never comes into widespread use, the idea has already taken on a life of its own. More and more boundary decisions – credit, treatment, dismissal from work – are made by algorithms. Until now, death was the last boundary. Today, someone is trying to move it.