Gerald Celente: OpenAI Opens the Door to Singularity

Published 29 November 2023
- News@NewsVoice
Gerald Celente, 16 Dec 2020. Photo: TrendsResearch.com

There has been a struggle between those who want to aggressively commercialize the powers of Q-Star, and those who have advocated a slower approach.

By Gerald Celente, The Trends Journal

So now we learn that the chaos at OpenAI might just have something to do with a “creature,” and not a “tool.”

It seems the world’s leading public generative AI company may have succeeded in developing an Artificial Intelligence that qualifies as a “conscious” intelligence.

Differences in opinion on how to proceed, including how aggressively to exploit and monetize the breakthrough, led to CEO Sam Altman being fired (and quickly snapped up by Microsoft), then re-hired by OpenAI after more than 700 company employees raised holy hell over Altman’s dismissal.

But maybe somebody should be asking, why is this company being permitted to decide, or not decide, to usher in the Singularity?

If the drama at OpenAI doesn’t lay open the complete failure of the much ballyhooed Biden administration “Safe AI” initiative, nothing does.

The company is blaring every possible warning sign that a potential existential threat to humanity is now on the edge of being thrust into the world by profit- and power-obsessed technocrats.

The fact is, despite warnings from some of the top developers and pioneers in AI, there are no guardrails preventing the creation of a Singularity-level AI (i.e., an artificial intelligence which surpasses human intelligence in every significant respect) other than the rhetoric of "safe AI."

Perhaps that’s because the Federal government itself is hellbent on developing the most sophisticated possible weaponized autonomous AI, for use by the military.

As Yahoo News and other outlets reported this past week, the U.S. is opposing new international laws that would prohibit the use of AI-controlled killer drones and other weaponized robotic AI systems.

These are systems which can make "autonomous" kill decisions (i.e., decisions made by AI without explicit human permission).

The New York Times noted that lethal autonomous AI weapon systems are being developed by the U.S., China, Israel, and perhaps others.

Human Replacement Destined to Go Next Level with Q*?

According to Zerohedge and other reporting, OpenAI has apparently achieved a major breakthrough in Artificial General Intelligence (AGI), with a system dubbed Q* (pronounced "Q-Star").

The breakthrough in AGI involved the ability of Q-Star to solve math problems that were not part of its training data.

The AI is not merely drawing from data, or mixing information and "resynthesizing" answers in response to queries. On some level (currently that of a grade-school student, according to reports), Q-Star has demonstrated an ability to generalize and to arrive creatively at correct solutions to math problems.

The AI is thinking.
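
To make that claim concrete: what the reporting describes is, in effect, a held-out evaluation. Below is a minimal, purely hypothetical Python sketch of that idea (the model interface and all names here are assumptions for illustration, not anything from OpenAI): a model is scored only on arithmetic problems that are verifiably absent from its training corpus, so a correct answer cannot simply be retrieved from memorized text.

```python
# Hypothetical sketch of a held-out math evaluation. None of these names
# refer to OpenAI's systems; `model.answer()` stands in for any
# text-generation model one might want to probe for generalization.
import random


def make_problem(rng: random.Random) -> tuple[str, int]:
    """Generate a simple arithmetic question and its correct answer."""
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    return f"What is {a} * {b}?", a * b


def held_out_accuracy(model, training_corpus: set[str], n: int = 100) -> float:
    """Score the model only on problems that never appeared in training.

    A high score suggests genuine generalization; a model that merely
    resynthesizes memorized text should fail on genuinely unseen problems.
    """
    rng = random.Random(0)
    correct = tried = 0
    while tried < n:
        question, answer = make_problem(rng)
        if question in training_corpus:  # skip anything it could have memorized
            continue
        tried += 1
        if str(answer) in model.answer(question):  # assumed interface
            correct += 1
    return correct / n


class ExactCalculator:
    """Toy stand-in 'model' that actually computes the answer."""

    def answer(self, question: str) -> str:
        a, b = question.removeprefix("What is ").removesuffix("?").split(" * ")
        return str(int(a) * int(b))


print(held_out_accuracy(ExactCalculator(), training_corpus=set()))  # prints 1.0
```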

At a conference shortly before he was ousted, then reinstated at OpenAI, Altman himself hinted at what OpenAI had done: “Is this a tool we’ve built or a creature we have built?”

Zerohedge detailed the controversy that advanced AI developments have created at OpenAI.

There has been a struggle between those who want to aggressively commercialize the powers of Q-Star, and those who have advocated a slower approach, which would assess the negative potentials of the system for humanity.

As Zerohedge put it:

“AGI has the potential to surpass humans in every field, including creativity, problem-solving, decision-making, language understanding, etc., raising concerns about massive job displacement. A recent Goldman report outlines how 300 million layoffs could be coming to the Western world because of AI.”

Of course, if the benefits of AI efficiencies were widely distributed to humans, and not controlled and hoarded by corporations and government, the economic aspects of AI and robotics might not be a bad thing.

The Trends Journal, in our Trends In Technocracy and Trends In Cryptos sections, has long advocated the use of crypto technology, including DAOs, smart contracting and tokenized distribution of payments, to radically decentralize the governance of generative AI productivity and the distribution of its benefits.
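
The mechanics behind that jargon are simple. As a rough, hypothetical illustration only (plain Python, not an actual smart contract, and not a design The Trends Journal has published), "tokenized distribution" means splitting a revenue pool pro rata across token holders, so the payout rule lives in a ledger rather than in a corporate decision:

```python
# Illustrative only: the pro-rata payout rule a DAO treasury contract
# might encode, written as ordinary Python rather than on-chain code.
from decimal import Decimal


def distribute_revenue(balances: dict[str, Decimal], pool: Decimal) -> dict[str, Decimal]:
    """Split a revenue pool among token holders in proportion to their holdings."""
    total = sum(balances.values())
    if total == 0:
        raise ValueError("no tokens outstanding")
    return {holder: pool * bal / total for holder, bal in balances.items()}


# Example: 1,000 units of AI-generated revenue split among three holders.
holders = {"alice": Decimal(600), "bob": Decimal(300), "carol": Decimal(100)}
print(distribute_revenue(holders, Decimal(1000)))
# {'alice': Decimal('600'), 'bob': Decimal('300'), 'carol': Decimal('100')}
```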

But we have also warned against the creation of Singularity level AI.

The dangers of an artificial intelligence which surpasses humans go well beyond just economic displacement. To meditate on a few:

  • What happens if and when such an AI decides to elude human attempts to constrain it?
  • What rights and protections will (at least some) humans clamor to accord AI?
  • How will a technocratic nexus of government and corporations promote or mandate “conscious” AI into our everyday lives to surveil and control us—perhaps eventually via forced integration into our bodies and brains?
  • How fast will AGI continue to advance via a self-learning loop that takes it so far beyond humans, by its own estimation, that such an AI comes to see humans as unfit masters, not worth serving?

Maverick tech titan Elon Musk reacted on X to the reports and rumors of Q-Star, and the political intrigue at OpenAI, with concise alarm about what it means for humanity, and a link to a Reuters article on the news: “Extremely concerning.”

And where are the Biden administration's "Safe AI" initiative promulgations, providing any effective oversight over any of it?

Oh that’s right—those regulations were about DEI (Diversity, Equity and Inclusion), and inculcating the right political biases into AI.

They had nothing to do with constraining or regulating little things like the Singularity.

By Gerald Celente, The Trends Journal. This article appeared in this week's Trends Journal. Please consider subscribing to the world's best trend-forecasting and independent news analysis.

