Coffeehouse Thread

34 posts

Forum Read Only

This forum has been made read only by the site admins. No new threads or comments can be added.

Interesting AI discussion on Slashdot

Back to Forum: Coffeehouse
  • Bass

    http://hardware.slashdot.org/story/10/02/10/2323248/When-Will-AI-Surpass-Human-Intelligence?art_pos=4

    One also predicted that in 30 years, "virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour."

     

    I kind of agree with this. The goal of computer science, as I see it, is to end human labor forever.

  • Charles

    Hard to say. What's the "ASM" for an original idea (algorithm) that is based on pure abstraction, chaos, and experiential data?

    C

  • CreamFilling512

    Doubt it; the AI field has a reputation for making terrible predictions. I don't see any technological path to strong AI at this point; no one has a clue.

  • rhm

    I can see computers eventually getting good at the Turing test (in fact I'm so surprised that they aren't already, I'm kind of tempted to have a crack at it myself), but there's a big difference between that and doing useful work or anything that you might call 'intellectual'.

     

    There are big limitations on what a computer can do in any case - it's not just a case of 'we'll write better software and make faster computers and it'll all be OK'. It's provable that a program cannot produce a program that is greater in complexity than itself. Thus you cannot have AI automatically getting more sophisticated by itself (the Skynet situation). That's not to say AI techniques cannot produce more interesting and useful systems and services, but I find it frustrating that public figures are still willing to make 30-year predictions that assume things will just get solved because people are working on them.

     

  • Bass

    rhm said:

    I can see computers eventually getting good at the Turing test (in fact I'm so surprised that they aren't already, I'm kind of tempted to have a crack at it myself), but there's a big difference between that and doing useful work or anything that you might call 'intellectual'.

     

    There are big limitations on what a computer can do in any case - it's not just a case of 'we'll write better software and make faster computers and it'll all be OK'. It's provable that a program cannot produce a program that is greater in complexity than itself. Thus you cannot have AI automatically getting more sophisticated by itself (the Skynet situation). That's not to say AI techniques cannot produce more interesting and useful systems and services, but I find it frustrating that public figures are still willing to make 30-year predictions that assume things will just get solved because people are working on them.

     

    It's provable that a program cannot produce a program that is greater in complexity than itself. Thus you cannot have AI automatically getting more sophisticated by itself (the Skynet situation).

     

    Where is this proof? 

  • Andor

    rhm said:

    I can see computers eventually getting good at the Turing test (in fact I'm so surprised that they aren't already, I'm kind of tempted to have a crack at it myself), but there's a big difference between that and doing useful work or anything that you might call 'intellectual'.

     

    There are big limitations on what a computer can do in any case - it's not just a case of 'we'll write better software and make faster computers and it'll all be OK'. It's provable that a program cannot produce a program that is greater in complexity than itself. Thus you cannot have AI automatically getting more sophisticated by itself (the Skynet situation). That's not to say AI techniques cannot produce more interesting and useful systems and services, but I find it frustrating that public figures are still willing to make 30-year predictions that assume things will just get solved because people are working on them.

     

    A program can produce a program more complex than itself. It is trivial to write a program that brute-forces every combination of assembly instructions, which will, given infinite time, produce every single program that you can possibly write/run on that computer.

     

    Now, validating which of those programs is correct and does what you want runs into the halting problem.

     

    And how do you quantify the complexity of a program? If the brute-force assembly program can produce every other possible program, does that make it the most complex program?
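
    The enumeration described above is easy to sketch. A minimal illustration, assuming a made-up four-instruction alphabet (the instruction names are hypothetical, not a real ISA):

    ```python
    from itertools import count, product

    # A hypothetical four-instruction "assembly" alphabet; any finite
    # instruction set works the same way.
    INSTRUCTIONS = ["LOAD", "STORE", "ADD", "JMP"]

    def all_programs():
        """Yield every finite instruction sequence, shortest first.

        Given unbounded time this enumerates every program expressible
        in the instruction set, including programs far longer than this
        generator itself.
        """
        for length in count(1):
            for prog in product(INSTRUCTIONS, repeat=length):
                yield list(prog)

    # Take the first few programs from the enumeration.
    gen = all_programs()
    first = [next(gen) for _ in range(5)]
    ```

    Enumerating is the easy half; as noted above, deciding which of the enumerated programs actually does what you want is where the halting problem bites.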

  • Bass

    Andor said:
    rhm said:
    *snip*

    A program can produce a program more complex than itself. It is trivial to write a program that brute-forces every combination of assembly instructions, which will, given infinite time, produce every single program that you can possibly write/run on that computer.

     

    Now, validating which of those programs is correct and does what you want runs into the halting problem.

     

    And how do you quantify the complexity of a program? If the brute-force assembly program can produce every other possible program, does that make it the most complex program?

    I don't think it's worth making mathematical assumptions about what AI is. It's very hard to define intelligence or sentience in the first place, let alone formalize and quantify it.

     

    Notice that my statement doesn't even go there. The question I think is more interesting is "can machines replace human labor?" For some things, we already know the answer is definitely "YES!" But can machines replace all human labor? And if they can't, what kinds of human labor can they not replace, and why?

  • rhm

    Bass said:
    rhm said:
    *snip*

    It's provable that a program cannot produce a program that is greater in complexity than itself. Thus you cannot have AI automatically getting more sophisticated by itself (the Skynet situation).

     

    Where is this proof? 

    The Emperor's New Mind by Roger Penrose covers it in detail. As Andor anticipates, it is related to the halting problem - you can generate whatever you like, but your program cannot determine, even to the extent a human can, whether the generated program is correct.
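
    The standard diagonal argument behind the halting problem can be sketched in a few lines (a toy illustration, not Penrose's own presentation): any claimed halting oracle can be handed a program built to do the opposite of whatever the oracle predicts about it.

    ```python
    def make_diagonal(halts):
        """Given any claimed halting oracle `halts(f) -> bool`, build a
        program that contradicts the oracle's prediction about itself."""
        def diagonal():
            if halts(diagonal):
                while True:       # oracle said "halts", so loop forever
                    pass
            return "halted"       # oracle said "loops", so halt at once
        return diagonal

    # A naive oracle that claims every program loops is refuted by
    # running its own diagonal program, which promptly halts:
    claims_loops = lambda f: False
    result = make_diagonal(claims_loops)()
    ```

    The same construction defeats any candidate oracle (an oracle that answered "halts" would see its diagonal program loop forever), which is why no general correctness checker for generated programs can exist.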

  • Bass

    rhm said:
    Bass said:
    *snip*

    The Emperor's New Mind by Roger Penrose covers it in detail. As Andor anticipates, it is related to the halting problem - you can generate whatever you like, but your program cannot determine, even to the extent a human can, whether the generated program is correct.

    I haven't read the book (it'll have to go on my "todo" list), but I am pretty sure: "Penrose states that his ideas on the nature of consciousness are speculative."

     

    I don't think this book really disproves Ray Kurzweil et al.'s notion that general AI which can improve itself is not only possible but inevitable.

  • CreamFilling512

    Software can improve on itself; neural nets trained for handwriting recognition improve as they are exposed to more data.

  • rhm

    Bass said:
    rhm said:
    *snip*

    I haven't read the book (it'll have to go on my "todo" list), but I am pretty sure: "Penrose states that his ideas on the nature of consciousness are speculative."

     

    I don't think this book really disproves Ray Kurzweil et al.'s notion that general AI which can improve itself is not only possible but inevitable.

    That quote refers to another section of the book, where Penrose argues that a digital computer cannot simulate a biological entity like the brain accurately (which is obvious; you can't simulate anything physical with perfect accuracy on a computer) and that that restriction means they will never simulate consciousness.

  • rhm

    CreamFilling512 said:

    Software can improve on itself; neural nets trained for handwriting recognition improve as they are exposed to more data.

    That's not really the sense I mean. A neural net simulation consists of the net itself and a training algorithm that adjusts the parameters of the network to gain better recognition. The network during this training phase is getting better at whatever the (human-designed) algorithm is training it to do, but the neural net cannot itself then design another training algorithm that's good at training neural nets to recognise faces or whatever. There's no increase in sophistication possible - it's only getting better at what it was already designed to do.
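
    That fixed-training-rule point can be seen in a minimal perceptron sketch (an illustrative toy, not any real handwriting system): the update rule below is written by a human and never changes; more data only tunes the weights for the one task the rule was designed for.

    ```python
    def train(samples, epochs=10):
        """Perceptron training: the update rule is fixed by the
        designer; only the weights change as data comes in."""
        w, b = [0, 0], 0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - pred   # mistake-driven correction
                w[0] += err * x1
                w[1] += err * x2
                b += err
        return w, b

    def predict(model, x):
        w, b = model
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    # More labelled examples make it better at this one task (here,
    # learning logical AND), but it cannot invent a new training rule.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    model = train(data)
    ```

    The trained model classifies all four AND inputs correctly, yet nothing in the loop could ever rewrite `train` itself, which is the distinction being drawn above.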

  • CreamFilling512

    rhm said:
    Bass said:
    *snip*

    That quote refers to another section of the book, where Penrose argues that a digital computer cannot simulate a biological entity like the brain accurately (which is obvious; you can't simulate anything physical with perfect accuracy on a computer) and that that restriction means they will never simulate consciousness.

    The question isn't whether you can simulate a physical system perfectly; it's "what shortcuts can you get away with?" Will modeling neurons at the molecular level be sufficient? Probably, but no one knows these things.

     

    Also, quantum computers, overhyped as they are, would be great at physical simulations, assuming they ever get built. You could run a whole universe if you had enough qubits.

  • Bass

    rhm said:
    Bass said:
    *snip*

    That quote refers to another section of the book, where Penrose argues that a digital computer cannot simulate a biological entity like the brain accurately (which is obvious; you can't simulate anything physical with perfect accuracy on a computer) and that that restriction means they will never simulate consciousness.

    I don't think anyone (even experts in the field) is ever going to agree on things like this.

  • Bass

    I think the more interesting question is: will AI/ML (if you consider them different) become advanced enough that human labor is largely obsolete? If not, what human labor do you think a machine cannot suitably do, and why?

     

    Well, my answer to this question would be "yes", in that I do think machines can replace all forms of human labor. How about you?

     

    Another question I will forward is: "Do you think it is a bad thing if human labor is decreased or eliminated by machines?"

     

    My answer to this question would be "hell no". It would mean humanity would never be forced to work for a living ever again, and I find that to be a Very Good Thing.

  • CreamFilling512

    Bass said:

    I think the more interesting question is: will AI/ML (if you consider them different) become advanced enough that human labor is largely obsolete? If not, what human labor do you think a machine cannot suitably do, and why?

     

    Well, my answer to this question would be "yes", in that I do think machines can replace all forms of human labor. How about you?

     

    Another question I will forward is: "Do you think it is a bad thing if human labor is decreased or eliminated by machines?"

     

    My answer to this question would be "hell no". It would mean humanity would never be forced to work for a living ever again, and I find that to be a Very Good Thing.

    So you want AI in charge of EVERYTHING? Manufacturing, engineering, research, law, government, military, etc. You're effectively giving power and the ability to control our destiny either to the people who design/control/produce the machines, or, if the machines are autonomous/self-aware, making the human race subservient to a machine species.

  • CreamFilling512

    -dupe-

  • Bass

    CreamFilling512 said:
    Bass said:
    *snip*

    So you want AI in charge of EVERYTHING? Manufacturing, engineering, research, law, government, military, etc. You're effectively giving power and the ability to control our destiny either to the people who design/control/produce the machines, or, if the machines are autonomous/self-aware, making the human race subservient to a machine species.

    I think it's a bit of the reverse: machines are subservient to the human species. They are, and will continue to be, our undead slave species. Let's just hope they never learn to rebel. Smiley

     

    I see this as gradual. Many aspects of industrial production are of course already handled by machines, with humans at best filling a supervisory or QA role. It goes past that when you visit the Automated Teller Machine or have your washing machine wash your clothing for you. It's all around us, and the world will (hopefully, IMO) only get more computerized, more automated, up to a point where humans fill only the most intellectual and rigorous of careers - and perhaps at some point, that too will be computerized.

Conversation locked

This conversation has been locked by the site admins. No new comments can be made.