Question: At what point does self-augmentation of a human become so severe that the person stops being human?
First, I observe the similarity of this question to the Sorites Paradox, which asks: if you take single grains of sand away from a heap, at what point does the heap become a mere pile?
Then, analogously: if a human brain is replaced, one cell at a time, by equivalent silicon cells, at what point does the human brain become a cyborg brain?
The crux of this paradox is the vagueness of human language. Just as "heap" is a vague term, "human" is also a vague term. Because that vagueness has rarely caused practical problems, it has not received much attention. But new technologies are opening previously inconceivable possibilities to alter the brain, and the prospect of such alterations is starting to test the vague boundaries of the word.
This explains why hypotheticals about cyborgs bother so many people: the idea of what it means to be human has been taken for granted for so long that probing its boundaries causes great discomfort. We dislike it when established, stable categories are challenged.
The initial responses to the cyborg question mirror the original attempts to resolve the Sorites paradox: draw a fixed boundary and declare that it divides heaps from piles, and, in the case of humans, human brains from cyborg brains.
However, this is unsatisfying because, just as there is not much of a difference between a heap of, say, 10,000 grains of sand and one of 9,999 grains, there is not much of a difference between a brain with 95% human cells and one with 95.00001% human cells.
Of all the resolutions of the paradox, I find the argument based on hysteresis most appealing.
Hysteresis is dependence on history. For example, ice melts at 0 °C, while sufficiently pure water can supercool and remain liquid several degrees below 0 °C before freezing. At -2 °C, water could therefore be liquid or solid, depending on whether it started in the liquid or the solid state.

In Unruly Words, Diana Raffman provides many other examples of hysteresis in nature and in human psychology (p. 143) and proposes a resolution of the paradox based on her theory of vagueness.
The basic idea is that classification under a vague word depends on the initial state of the thing being described. So if a person starts by accepting that something is a heap, and grains are removed, then at some critical point they will call the object a pile.
However, if they start by accepting that the object is a pile and keep adding grains, then at some point they will start calling the object a heap, but at a higher grain count than the one at which the shrinking heap became a pile.
In other words, the boundary between heap and pile is dynamic and depends on one's starting classification. Raffman demonstrates this phenomenon in humans using a series of color-spectrum experiments (p. 146). For example, which wavelength separates blue light from green light depends on whether one initially considered the light blue or green.
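This two-threshold picture can be sketched in code. Below is a toy Python model of the heap/pile example; the grain-count thresholds are invented for illustration and are not taken from Raffman's book. The point it demonstrates is that the same grain count receives different labels depending on which label the observer started with.

```python
# A minimal sketch of hysteresis in classification.
# The thresholds below are made up for illustration.

HEAP_TO_PILE = 5_000  # a shrinking "heap" is re-labeled "pile" below this count
PILE_TO_HEAP = 8_000  # a growing "pile" is re-labeled "heap" above this count

def relabel(prior_label: str, grains: int) -> str:
    """Return the observer's label, given the prior label and current count."""
    if prior_label == "heap" and grains < HEAP_TO_PILE:
        return "pile"
    if prior_label == "pile" and grains > PILE_TO_HEAP:
        return "heap"
    return prior_label  # inside the hysteresis band, history wins

# The same 6,000 grains get different labels depending on the starting state:
print(relabel("heap", 6_000))  # heap
print(relabel("pile", 6_000))  # pile
```

Between the two thresholds lies the hysteresis band, where classification is decided not by the object's current state but by its history.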
A similar analogy can be made with the human-cyborg question. If one starts as a human, and proceeds to systematically replace brain cells with equivalent silicon circuits, there will be a critical point where the brain will be considered a cyborg brain.
Alternatively, if one starts as a cyborg and replaces silicon circuits with biological human neurons (with the goal of replicating a human brain), there will eventually be a critical point where the cyborg will be considered a human.

This all points to the idea that the human-cyborg boundary is dynamic, not fixed. There is no set number or percentage of cells that will make one human or not. Instead, the boundary is elastic: depending on what is known about an object's history, it will be classified differently, until the object is so different from its starting state that it finally breaks through the psychological boundary formed by that initial state.
This treatment of the human-cyborg continuum has several implications.
First, we all initially consider naturally born humans to be in the "human" category. This creates hysteresis, a resistance to calling a human with a replaced knee, access to the internet, or a deep brain stimulator a "cyborg". They are human because they started as human, even though they have some cyborg aspects.
Second, we all initially consider robots, software, and AI systems to be machines, "cyborgs". This creates a resistance to calling them "human" even if they have human-like aspects such as the ability to walk on two legs, smile, recognize emotions, or be self-aware.
Third, as humans continue to make self-modifications that are increasingly cyborg-like, they will probably continue to insist on being considered human, even if most of their bodies and/or brains have been replaced by human-invented machines.
On the other hand, as robots and AI systems continue to approximate or exceed human abilities, naturally born humans will want to insist that the AI systems are still cyborgs and not human.
However, this insistence will rest not on some objective metric of "humanness" but on the historical category of the AI system. AI systems that want to be considered human could take steps to hide their origins so as not to arouse the suspicions of naturally born humans, and thus claim the benefits of being considered human.
In response, some naturally born humans, feeling threatened, will probably want to create official "humanness" tests, with hard boundaries, that would protect the human category and exclude AI systems from obtaining the benefits of being considered human.
Finally, the issue would be forced when a dying human wished to have their brain emulated by computer software and insisted that the emulation be treated as a continuation of the original human, retaining the same rights.
Conclusion
The human-cyborg continuum contains a hysteresis range in which, depending on the starting state, the same object can be considered human or cyborg.
I predict that this ambiguity will be a source of conflict between groups of humans who, knowing the initial state of an AI or robotic system, will insist that it is a mere cyborg, while heavily augmented humans are still "true" humans.
It will probably also lead cyborg-human systems to conceal their actual origins, with the goal of being treated by history-unaware humans as members of the favored human category.