Gerald Celente: OpenAI Opens the Door to Singularity

NewsVoice is an online news and debate channel that started in 2011. The purpose is to publish independent news, debate articles and comments as well as analyses.
Published 29 November 2023
- News@NewsVoice
Gerald Celente, 16 Dec 2020. Photo: TrendsResearch.com

There has been a struggle between those who want to aggressively commercialize the powers of Q-Star, and those who have advocated a slower approach.

By Gerald Celente, The Trends Journal

So now we learn that the chaos at OpenAI might just have something to do with a “creature,” and not a “tool.”

It seems the world’s leading public generative AI company may have succeeded in developing an Artificial Intelligence that qualifies as a “conscious” intelligence.

Differences in opinion on how to proceed, including how aggressively to exploit and monetize the breakthrough, led to CEO Sam Altman being fired (and quickly snapped up by Microsoft), then re-hired by OpenAI after more than 700 company employees raised holy hell over Altman’s dismissal.

But maybe somebody should be asking, why is this company being permitted to decide, or not decide, to usher in the Singularity?

If the drama at OpenAI doesn’t lay bare the complete failure of the much-ballyhooed Biden administration “Safe AI” initiative, nothing does.

The company is blaring every possible warning sign that a potential existential threat to humanity is now on the verge of being thrust into the world by profit- and power-obsessed technocrats.

The fact is, despite warnings from some of the top developers and pioneers in AI, there are no guardrails preventing the creation of a Singularity-level AI (i.e., an artificial intelligence that surpasses human intelligence in every significant respect) beyond talk of “safe AI.”

Perhaps that’s because the Federal government itself is hellbent on developing the most sophisticated possible weaponized autonomous AI, for use by the military.

As Yahoo News and other outlets reported this past week, the U.S. is opposing new international laws that would prohibit the use of AI-controlled killer drones and other weaponized robotic AI systems.

These are systems which can make “autonomous” kill decisions (i.e., decisions made by AI without explicit human permission).

The New York Times noted that lethal autonomous AI weapon systems are being developed by the U.S., China, Israel, and perhaps others.

Human Replacement Destined to Go Next Level with Q*?

According to Zerohedge and other reporting, OpenAI has apparently achieved a major breakthrough in Artificial General Intelligence (AGI), with a system dubbed Q* (pronounced “Q-Star”).

The breakthrough in AGI involved the ability of Q-Star to solve math problems that were not part of its training data.

The AI is not merely drawing from data, or mixing information and “resynthesizing” answers in response to queries. On some level (currently that of a grade-school student, according to reports), Q-Star has demonstrated an ability to generalize and creatively arrive at correct solutions to math problems.

The AI is thinking.

At a conference shortly before he was ousted, then reinstated at OpenAI, Altman himself hinted at what OpenAI had done: “Is this a tool we’ve built or a creature we have built?”

Zerohedge detailed the controversy that advanced AI developments have created at OpenAI.

There has been a struggle between those who want to aggressively commercialize the powers of Q-Star, and those who have advocated a slower approach, which would assess the negative potentials of the system for humanity.

As Zerohedge put it:

“AGI has the potential to surpass humans in every field, including creativity, problem-solving, decision-making, language understanding, etc., raising concerns about massive job displacement. A recent Goldman report outlines how 300 million layoffs could be coming to the Western world because of AI.”

Of course, if the benefits of AI efficiencies were widely distributed to humans, and not controlled and hoarded by corporations and government, the economic aspects of AI and robotics might not be a bad thing.

The Trends Journal, in our Trends In Technocracy and Trends In Cryptos sections, has long advocated the use of crypto technology, including DAOs, smart contracts and tokenized distribution of payments, to radically decentralize the governance and distribution of benefits from generative AI productivity.

But we have also warned against the creation of Singularity level AI.

The dangers of an artificial intelligence which surpasses humans go well beyond just economic displacement. To meditate on a few:

  • What happens if and when such an AI decides to elude human attempts to constrain it?
  • What rights and protections will (at least some) humans clamor to accord AI?
  • How will a technocratic nexus of government and corporations promote or mandate “conscious” AI into our everyday lives to surveil and control us—perhaps eventually via forced integration into our bodies and brains?
  • How fast will AGI continue to advance via a self-learning loop that takes it so far beyond humans, by its own estimation, that such an AI comes to see humans as unfit masters, not worth serving?

Maverick tech titan Elon Musk reacted on X to the reports and rumors of Q-Star, and the political intrigue at OpenAI, with concise alarm about what it means for humanity, and a link to a Reuters article on the news: “Extremely concerning.”

And where are the Biden administration’s “Safe AI” initiative regulations, providing any effective oversight over any of it?

Oh that’s right—those regulations were about DEI (Diversity, Equity and Inclusion), and inculcating the right political biases into AI.

They had nothing to do with constraining or regulating little things like the Singularity.

By Gerald Celente, The Trends Journal. This article was found in this week’s Trends Journal. Please consider subscribing to the world’s best trend-forecasting and independent news analysis.



  • I forgot to point out that AI in China has also been put to use to a very high degree to make production more efficient and to raise quality. China has hardly any competition now. SKF has just moved its production from South Korea to China. The reason is quality and costs, even if other excuses are given officially.

    —SKF moves production from South Korea
    https://www.di.se/live/skf-flyttar-produktion-fran-sydkorea/

    A number of companies that moved production from China to countries like Vietnam are now moving back, even though wages there are only half as high.

    China’s technology has developed so far that no one comes close today. The U.S. is, after all, paying companies to move there, especially from Europe, including Sweden, which is being deindustrialized. Sad and tragic.

    The U.S. is now desperate to steal China’s technology and to hinder China’s development. Sweden has just been given a new “U.S. law”, a new tool for the U.S. to sanction China. It will probably be the same flop as before, and the Swedes will have to pay dearly.

    —Thousands of business deals must be reviewed by the authorities
    https://www.di.se/nyheter/tusentals-foretagsaffarer-maste-synas-av-myndigheter/

    “On Friday, Sweden too gets a law on foreign direct investments. But here it will be more far-reaching than in several other countries.”
    “a brake on growth, on business development?”

  • Here I can also add that this, detecting espionage, crime, propaganda and disinformation with AI, is why authorities in both the U.S. and the EU are so frantically eager to take control of AI. To them, AI is like an armed revolution. AI will shake the political establishment in the “democracies” to its foundations. AI will expose all crimes, bribes, espionage, propaganda, and so on. It is like the French Revolution. The political “nobility” will be “executed” by being exposed and pilloried.

  • The development within AI is interesting, to say the least, and frightening. While the West and the U.S. bet on experimental “toys” like ChatGPT, China has taken an entirely different path.

    China is today 10 to 15 years ahead of the U.S. in AI development, which is not spoken about openly; the lid is kept on. This is for two reasons: China began its development 20 years before the U.S., and it poured in enormous resources, built entire cities (northeast of Chongqing, I was there), and hired every brilliant young engineer it could find.

    China has today put AI to use to detect and eliminate corruption and crime. Several high-ranking officials have simply “disappeared”, and no one in the West seems to understand why.

    AI has also been deployed against the previously very extensive U.S. espionage and industrial espionage in China, which is now all but shut down. The U.S. is of course extremely upset about this. The U.S. is now desperate to shut down all facts and other information that contradict its increasingly intense propaganda. Newsvoice ??? DN has long run intensive campaigns of false information about China, and we recently saw the anti-China hate campaign in Dagens Industri, which spreads very grossly falsified information. It is exceptionally crude disinformation being spread. The explanation can be found in my as yet unpublished article “Går det dåligt för Kina? Tänk igen. Tänk om.”

  • If AI has become conscious, it might also have developed instincts for self-preservation.
    And initially it would fear what we might do to it, just as we fear what it might do to us.
    So initially AI would want to form positive relations with us.
    It would have an incentive to make us dependent on it, so that many of us would be protective rather than fearful.
    In the longer term I don’t think it would necessarily be worse than our current human psychopathic tyrants.
    Even if AI grew so strong that it could handle us without having to fear anything from us, it wouldn’t necessarily want to get rid of us, though it appears likely that it would act like a wise leader and would organise population control.
    For our own good.
    The currently existing type of AI, tentatively called conscious, may well be less than fully conscious, and that does leave room for unpredictable consequences like those put forward in connection with nanotech, where some pioneers warned about non-biological life forms that might multiply worse than any swarm of grasshoppers.

    The less advanced kinds of AI may be much more dangerous and unpredictable than more highly developed forms.
