TIME is reporting on a new U.S. government-commissioned report warning of a potential extinction-level threat from AI. Check out the latest details on this matter below.
AI – a threat to humanity
A recent report commissioned by the U.S. government warns that there are significant national security risks associated with artificial intelligence (AI) that need to be addressed urgently.
If these risks are not addressed quickly and decisively, the report warns, they could in the worst-case scenario amount to an “extinction-level threat to the human species.”
“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”
AGI is still a hypothetical technology, but leading AI labs are actively working toward it. It is expected to surpass human-level capabilities across most tasks, and some experts predict it could be developed within the next five years.
To research the topic, the report’s three authors spoke with more than 200 people, including government employees, AI experts, and staff at frontier AI companies such as OpenAI, Google DeepMind, Anthropic, and Meta.
Some of their findings are concerning. The report notes that many AI safety workers at cutting-edge labs worry that their executives’ decision-making is driven by perverse incentives.
The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry.
According to the report, Congress should consider making it illegal to train AI models using more than a certain level of computing power.
The report recommends that a new federal AI agency be established to set that threshold, which could sit slightly above the levels of computing power used to train today’s cutting-edge models, such as OpenAI’s GPT-4 and Google’s Gemini.
The report also recommends that this agency require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold.
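To put the compute-threshold idea in concrete terms, here is a minimal Python sketch of how such a rule might be checked, assuming the common 6 × parameters × tokens rule of thumb for estimating training FLOPs. The threshold value and the model sizes below are hypothetical placeholders for illustration, not figures from the report.

```python
# A minimal sketch of a compute-threshold check. The 6 * N * D heuristic
# for training FLOPs and the 1e26 ceiling are illustrative assumptions,
# not figures taken from the report.

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute via the common 6 * parameters * tokens heuristic."""
    return 6.0 * n_params * n_tokens

# Hypothetical regulatory ceiling (illustrative only).
THRESHOLD_FLOPS = 1e26

def needs_permission(n_params: float, n_tokens: float) -> bool:
    """Would this training run exceed the hypothetical ceiling?"""
    return estimate_training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

if __name__ == "__main__":
    print(needs_permission(1e12, 2e13))  # 1.2e26 FLOPs -> True
    print(needs_permission(1e11, 1e13))  # 6.0e24 FLOPs -> False
```

Under a rule of this shape, a planned training run whose estimated compute exceeds the ceiling would trigger the permission requirement described above.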
The report further suggests that authorities act urgently to outlaw the publication of the “weights,” or internal workings, of powerful AI models, for example under open-source licenses.
Violations of this law should be punishable by imprisonment. The government should also tighten controls on the manufacture and export of AI chips, and it should direct federal funding toward “alignment” research that focuses on making advanced AI technology safer.
Check out the full piece published by TIME to learn more.