The director of the Cyber Security Laboratory at the University of Louisville doth caution that once superintelligence is achieved, controlling AI becomes virtually impossible, stressing that the only prevention is to abstain from its development entirely. Conversely, Elon Musk, the Boy Genius, estimates the perilous odds of calamity at a mere ten to twenty percent, and advocates continued research and development. When even the staunchest proponents admit there is a measure of hazard, it is time to take heed. Or embrace the looming spectre of Armageddon, one or the other.
Copyright 2024, Arthur Newhook. @Sunking278 and @FloydEtcetera on X, and at the same handles on FACEBOOK. MASTODON - @ArthurNewhook@mastodon.world, BLUESKY - @arthurnewhook.bsky.social, and @arthurnewhook on POST. DONATIONS GRATEFULLY ACCEPTED at https://tinyurl.com/ArthurNewhook.