Wednesday, April 3, 2024

An AI safety researcher warns that there is a 99.999999% probability that Artificial Intelligence is an existential threat to humanity


Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, doth caution that once superintelligence is achieved, controlling AI becomes virtually impossible, stressing that the only way to prevent catastrophe is to abstain from developing it in the first place. Conversely, Elon Musk, the Boy Genius, puts the odds of calamity at a comparatively modest ten to twenty percent and advocates continued research and development. When even the technology's staunchest proponents admit there is a measure of hazard, it is time to take heed. Or embrace the looming spectre of Armageddon, one or the other.

Copyright 2024, Arthur Newhook. @Sunking278 and @FloydEtcetera on X, and at the same handles on FACEBOOK. MASTODON - @ArthurNewhook@mastodon.world, BLUESKY - @arthurnewhook.bsky.social, and @arthurnewhook on POST. DONATIONS GRATEFULLY ACCEPTED at https://tinyurl.com/ArthurNewhook.
