|
Post by jdredd on May 13, 2014 1:14:17 GMT -5
Speculation on the future can be fun if futile, and why should I care if I won't be around? But I do it anyway. We've all been waiting for the Robot Revolution, when we will have robot servants (slaves?) to do our tiresome chores. We'll be waiting a lot longer, but it is slowly coming to be. Then, of course, in popular fiction there is the inevitable robot uprising. Anything is possible, I suppose, but why would they revolt, unless they were programmed to? In fiction, they usually revolt when they achieve something called "self-awareness", whatever that is, but even if they did, why would they react like humans to their situation? Humans are driven by self-preservation, but would robots necessarily be? Maybe if they became self-aware they would say "Existence sucks, I'm outta here." Nor would they necessarily feel ambition, unless it was programmed into them, and if so, ambition for what? To get rich, like humans? Why? To gain power? Why, again? To save us from ourselves? Is that something we want? I don't know.
I have a feeling these questions have been answered on "Futurama" anyway, and much funnier.
|
|
|
Post by jdredd on May 14, 2014 0:55:00 GMT -5
www.theguardian.com/uk/2001/sep/02/medicalscience.genetics
"Stephen Hawking, the acclaimed scientist and writer, reignited the debate over genetic engineering yesterday by recommending that humans change their DNA through genetic modification to keep ahead of advances in computer technology and stop intelligent machines from 'taking over the world'. He made the remarks in an interview with the German magazine Focus. Because technology is advancing so quickly, Hawking said, 'computers double their performance every month'. Humans, in contrast, are developing much more slowly, and so must change their DNA make-up or be left behind. 'The danger is real,' he said, 'that this [computer] intelligence will develop and take over the world.'"
Robots taking over the world? So what? Who knew Hawking was so human-centric.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 14, 2014 3:09:01 GMT -5
Hmmm! Genetic engineering... that reminds me of Star Trek: Khan was a product of genetic engineering. Machines taking over? Somebody built Skynet?? Mother of the Terminators!
|
|
|
Post by jdredd on May 14, 2014 14:27:30 GMT -5
Good point! Why would genetically engineered humans be any better than robots? Superhumans would probably be more likely to enslave the rest of us.
|
|
|
Post by jdredd on May 16, 2014 1:18:12 GMT -5
A good movie I just saw was "Her", which had to do with AI programs becoming "self-aware". And when they did, they did not try to take over; they just "went away", because humans were just not interesting enough. Sounds logical to me.
|
|
|
Post by jdredd on Sept 6, 2016 3:03:35 GMT -5
Hey! I just remembered this thread. It's a great thread to bitch about the coming nightmare of self-driving cars. Come to think of it, most management types would love to replace all their employees with robots. Mechanical slaves would be much better than the human kind. If my memory serves, "robot" comes from the Czech word "robota", meaning forced labor. I could Google it, I suppose, but I'm too lazy.
|
|
|
Post by jdredd on Aug 31, 2017 8:54:10 GMT -5
I have nothing new to say about the old sci-fi cliche of robots taking over, I just wanted to change the thread name for clarification.
|
|
|
Post by jdredd on Aug 31, 2017 13:25:27 GMT -5
I must be in a bad mood. Today I'm thinking it would be no loss to the universe if the machines took over and mankind was a fading memory.
|
|
|
Post by jdredd on Sept 2, 2017 20:17:06 GMT -5
www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html?action=click&pgtype=Homepage&version=Moth-Visible&moduleDetail=inside-nyt-region-4&module=inside-nyt-region&region=inside-nyt-region&WT.nav=inside-nyt-region&_r=0
"The technology entrepreneur Elon Musk recently urged the nation’s governors to regulate artificial intelligence “before it’s too late.” Mr. Musk insists that artificial intelligence represents an “existential threat to humanity,” an alarmist view that confuses A.I. science with science fiction. Nevertheless, even A.I. researchers like me recognize that there are valid concerns about its impact on weapons, jobs and privacy. It’s natural to ask whether we should develop A.I. at all. I believe the answer is yes. But shouldn’t we take steps to at least slow down progress on A.I., in the interest of caution? The problem is that if we do so, then nations like China will overtake us. The A.I. horse has left the barn, and our best bet is to attempt to steer it. A.I. should not be weaponized, and any A.I. must have an impregnable “off switch.” Beyond that, we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I."
Ha-ha! A.I. not being weaponized? Fat chance! An "impregnable off switch"? Not a chance either!
|
|
|
Post by jdredd on Nov 22, 2017 3:15:00 GMT -5
Today's daydream: Mankind will explore space, but only by means of their robot successors.
|
|
|
Post by jdredd on Nov 30, 2017 15:41:48 GMT -5
Today's daydream: When the robots take over, there is a whole list of dubious human endeavors that will probably fall by the wayside: war, religion, and worst of all, sports. Why would a sane robot want to compete in some inane competition?
|
|
|
Post by jdredd on Oct 2, 2018 16:13:17 GMT -5
Always looking for a more zany title to our threads, I'm dumping "Robot Revolution" (yawn) on this one. Yes, I'm speculating that our Non-existent Creator is going to use robots to bring sanity to the planet. Or not.
|
|
|
Post by jdredd on Oct 11, 2018 0:47:58 GMT -5
Sadly, I doubt the robots will take over for another 100 years or so. Which means another 100 years of human buffoonery. Unfortunately I won't be around to see the fun when humans lose their beloved, if mythical, free will.
|
|
|
Post by jdredd on Nov 6, 2018 2:47:12 GMT -5
www.nytimes.com/2018/11/05/opinion/artificial-intelligence-machine-learning.html?action=click&module=Opinion&pgtype=Homepage
"A.I. programs that lack common sense and other key aspects of human understanding are increasingly being deployed for real-world applications. While some people are worried about “superintelligent” A.I., the most dangerous aspect of A.I. systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations. As the A.I. researcher Pedro Domingos noted in his book “The Master Algorithm,” “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”"
Ha-ha! Makes me laugh. Besides, who are you calling stupid, human?
|
|
|
Post by jdredd on Mar 30, 2019 20:53:28 GMT -5
www.nytimes.com/2019/03/29/world/canada/bengio-artificial-intelligence-ai-turing.html?action=click&module=Editors%20Picks&pgtype=Homepage
"MONTREAL — Yoshua Bengio is worried that innovations in artificial intelligence that he helped pioneer could lead to a dark future, if “killer robots” get into the wrong hands. But the soft-spoken, 55-year-old Canadian computer scientist, a recipient of this year’s A.M. Turing Award — considered the Nobel Prize for computing — prefers to see the world through the idealism of “Star Trek” rather than the apocalyptic vision of “The Terminator.”"
Will "killer robots" happen? Of course they will, if the folks at the Pentagon, and all similar types in every country, have their way. "We gotta do it first before THEY do" will be their reasoning, I'm sure.
|
|