This is the problem of identity, and the philosophers' debate continues. If you can be substituted in place, one cell at a time, with a machine, then eventually you could be replaced entirely by one. Descartes believed everything was rational and that man was a machine. We lived under that idea for a while - and still do - but Gödel proved it wrong. The universe is not all rational, and the Incompleteness Theorem proves it.
No, it doesn't. The Incompleteness Theorem is about mathematical logic, not about the universe, or reality. It's like saying that something cannot happen because it would violate the rules of chess (or poker, for that matter).
You are assuming that a tiny computer is capable of replacing anything sentient. That's my problem with this: it simply can't. We can plug electrodes into our brains and use prosthetics, but that's not the same. There is no tiny computer that can replace a single ganglion, or a set of nerves working in tandem. That's science fiction.
Now you've lost me. First off, is a single neuron sentient? Is an E. coli? Why? (Please don't answer "because it collapses wave functions" or we are back to square one.)
Simple thought experiment. Imagine that I undergo some exotic surgery in which a few of my brain cells get replaced with a tiny computer emulating exactly the behavior of those cells. Same connections, same reaction to stimuli, same timing.
The next day, a few more cells get replaced, and so on, until my brain is made entirely of silicon and firmware.
Would I still be intelligent at this point?
If no, care to guess when and why I ceased to be?
Looks like USB stick PCs are already on sale in the UK; then I saw this....
A Full PC inside a mouse. Sweet, if it has an i7 with 16GB ...
What are you supposed to do for the keyboard, though? It's hard to go unnoticed when there are two cables protruding from your mouse...
Seriously, bonus points for thinking out of the box, but I cannot think of any reasonable situation where a smartphone wouldn't be more convenient.
It depends. Just because your program runs on bare metal doesn't mean it's an OS. At a bare minimum, it would also have to define an API that other programs can use, right?
My point is that when the application domain is very well defined, one can use the extra knowledge to make assumptions that no general purpose OS would be allowed to make.
That produces devices that run faster and/or burn less energy. They might even be more secure, if only because there are fewer moving parts, but that's not always a given.
"I hope you like Harry Potter references, because I made seven backups."
In the cloud, obviously.
Seriously though, the point is we tend to imagine an AI as some sort of human replica made of silicon and bits, which is reflected in the Turing Test.
The problem is that the single attribute of immortality (not in the strictest sense of the word, but close enough) would make any AI as inhuman as anything we can possibly imagine, except maybe for some obscure deity.
We wouldn't share the same needs, goals, or priorities; heck, we wouldn't even be competing for the same resources, with the possible exception of energy. And even if we were, the AI wouldn't need to compete: it could just slow down in a corner and wait patiently until we finally go extinct.
Pretty much like a teenager.