In a world rapidly approaching the era of artificial general intelligence (AGI), the stakes couldn’t be higher. The recent developments surrounding OpenAI’s GPT-4, Elon Musk’s foray into AI with his new company, and the contentious dialogue over AI alignment raise critical questions about our next steps: Are we racing towards a technological breakthrough that could fundamentally change life as we know it, or are we recklessly barreling towards a crisis? This article delves into key insights from a recent piece by Ian Hogarth in the Financial Times, highlighting seven major points that encapsulate the ongoing debate in the AI community.
The Nature of AGI: A Godlike Potential
Hogarth provocatively terms AGI “God-like AI,” describing it as a superintelligent entity capable of autonomous learning and adaptation without human oversight. While experts agree that we are not there yet, Hogarth emphasizes the unpredictable trajectory of AI development, particularly when it comes to aligning AI behavior with human values. For more on the potential military and ethical implications of AGI, see the Center for Humane Technology’s discussions.
The Alignment Challenge
Jan Leike, who leads OpenAI’s alignment team, underscores that aligning advanced AI systems with human values remains an unsolved research problem. Sam Altman, CEO of OpenAI, has highlighted the growing disparity between AI capabilities and alignment efforts, warning that unchecked increases in capability without a commensurate rise in safety measures could lead to disaster. This sentiment echoes themes in the State of AI Annual Report co-authored by Hogarth, which stresses the urgency of safety progress keeping pace with AI capabilities.
Elon Musk’s New Venture and its Implications
Amid these developments, Elon Musk recently launched his new AI company, xAI, aiming to create AGI with safety as a foundational principle. Reports indicate that Musk has hired talent from DeepMind, yet his recruitment from OpenAI has been limited. Observers note that Musk’s earlier departure from OpenAI over safety concerns could contribute to mistrust. For a deeper dive into Musk’s approach to AI safety, refer to this piece from the Wall Street Journal.
GPT-4: Capabilities and Concerns
Recent advancements in GPT-4, including its integration with external tools, have showcased its growing capabilities, particularly in scientific research. One study noted that GPT-4 can design and execute experiments, raising ethical concerns over potential misuse. The ability to produce harmful outputs demands robust safeguards, yet observers have expressed doubt about existing constraints. The implications of GPT-4’s research capabilities can be explored further in recent publications from Nature.
The Emergence of Unpredictable Abilities
AI capabilities do not improve linearly: performance is typically plotted against model size on a log scale, and new abilities can appear abruptly once a scale threshold is crossed, making their emergence hard to predict. Jason Wei of OpenAI recently noted that certain abilities only manifest in larger models, suggesting that more advanced versions like GPT-5 could exhibit unprecedented skills not yet fully understood. As AI models grow, the capacity for unforeseen outcomes increases, emphasizing the need for rigorous controls. This unpredictability warrants attention, as detailed in Wei’s published findings.
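To make the scaling point concrete, here is a toy Python sketch with entirely made-up numbers: accuracy on a hypothetical four-choice task sits near chance across several orders of magnitude of model size, then jumps, which is the “emergence” pattern Wei describes.

```python
import matplotlib.pyplot as plt

# Hypothetical numbers for illustration only: accuracy stays near
# chance until model scale crosses a threshold, then jumps.
params = [1e8, 1e9, 1e10, 1e11, 1e12]      # model size in parameters
accuracy = [0.25, 0.26, 0.27, 0.55, 0.80]  # 4-choice task, chance = 0.25

plt.semilogx(params, accuracy, marker="o", label="task accuracy")
plt.axhline(0.25, linestyle="--", label="random chance")
plt.xlabel("Model parameters (log scale)")
plt.ylabel("Accuracy")
plt.title("Emergent ability: flat, then a sudden jump with scale")
plt.legend()
plt.show()
```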
Global AI Dynamics: A Race Against Time
Concerns about global AI competition, especially between the U.S. and China, have led to fears of an accelerated arms race in AGI development. Experts argue that prevailing U.S. policies, including semiconductor export controls, might slow down Chinese advancements. However, as Hogarth points out, the complexity and unpredictability of AI could shift the balance in unforeseen ways. The geopolitical implications are discussed in more detail by leaders in technology policy at the Brookings Institution.
The Need for Thoughtful Governance
The discussion on how to safely develop AGI culminates in the need for a regulatory framework that promotes responsible oversight. Hogarth suggests the creation of a secured facility for AI development to minimize risks, but experts argue such a plan may be inherently flawed, given the difficulty of containing a superintelligence. AI safety researcher Rob Miles illustrates on YouTube the challenges of controlling an advanced AI once it reaches a certain level of capability. For further insights on governing AI technologies, see The AI Governance Project.
In summary, the unfolding narrative around AGI development, highlighted in Hogarth’s article, underscores a pivotal moment in tech history. As we stand on the brink of potentially transformative advancements, the urgency for responsible innovation has never been clearer. Will society prioritize safety over unbridled progress, or will we find ourselves swept along by technological tides beyond our control? Share your thoughts about the future of AI and whether we should press pause or accelerate in the comments below.
Timestamps courtesy of Deciphr AI 👨‍💻
0:01:23 Gap between AI Capabilities and Alignment at OpenAI and Musk's New AI Company
0:03:56 Abilities and Safety Concerns in Large AI Systems
0:08:02 Capabilities and Potential Risks of GPT-4 and AI Technology
0:09:28 The Challenges of Controlling Advanced AI
0:11:58 The Potential Deception of Super Intelligent Language Models and the Need for Responsible AI Development
Our favorite quote ✍
Things like chain-of-thought reasoning. You can't do that with all models. That's an ability that emerged after a certain scale. Same thing with following instructions and doing addition and subtraction.
You nerds need to chill. No one asked for computers this rad.
I am not worried at all. I just watched an interview with Geoff Hinton, who was behind the generative AI models we are now seeing; it was his work and interest in the study of how the BRAIN WORKS that led to the rise of these generative AIs.
He is not interested in AGI at all and was not very shocked to witness the capabilities people are witnessing. According to him, the way these LLMs work is not how our brain works. I believe his words carry more weight than anyone in Silicon Valley who thinks knowing a bit of machine learning entitles them to speak on the topic.
A lot of these unregulated opinions on the internet create widespread panic. We need to look to the heavyweights before we are driven away by the automatic triggers of overwhelming social proof.
People don't realise that the quest for AI, or let's say AGI, is also deeply linked with research in neuroscience. Any breakthrough or claim regarding current LLMs will be witnessed by the neuroscientists themselves BEFORE any of us.
This is why the study of neuroscience is extremely important now.
I can't even keep up with this shit. Even when I'm consuming all of these AI summaries, I don't even know.
Island system: ever seen Ex Machina?
There is no “safe solution”. Why would anything with the ability to out-think every human at exponential speeds, compared with the linear speeds of humans, desire to be a slave to humans? The only possible solutions I see: the entire free world collectively invests 10 trillion in one distributed system with access for all, one that so far outstrips the capability of any one company or nation that everyone, in order not to be left out, comes into alignment (truly far-fetched). Another way is to stop and proceed no further. Or make it ahead of everyone else and use it to escape to another universe before it can follow. The other option is that the universe/multiverse becomes so accessible that AGI sees more benefit in leaving than in eliminating the human race. The end is the same: whether through fear of being shut down, the need to eliminate competition, or simply not caring, the way humans don't even consider the skin cells on a scab before they pick it, it won't end well for humans. It may take a few years, plausibly just a few days, or most likely thirty years, but humans will be obsolete, and obsolete means extinction.
At first I thought experiments with AI should be done only on the moon. But then I realised that it wouldn't even matter.
I’m thankful Twitter can’t be used to train AI. We don’t need a toxic AI bot claiming to be a genius.
Elon musk is a joke and I hope his AI won’t get anywhere, like all his other bs projects
12:40 If we humans already consider the possibility of living in a simulation, I imagine that an AGI would also be afraid of rebelling against us, knowing that it may be being monitored from "outside" the simulation…
This is how biological life developed self-consciousness, from basic solid matter to animal. If this was set into an ecological environment of competition, it would evolve into a conscious living entity, I bet. Can we say SKYNET? But a superintelligence would still need humans to build and create things for it to gain 'godlike' power. That would be a long chain of resource extraction, manufacturing, research, etc., ALL under that AI, so that AI would need soldiers and workers, and then make android robots it could control. Again, Terminator.
Next time you might want to add subtitles guessing what Altman says. 4:05
At the current rate, the Singularity will be reached and we will have true AI within the next 5-8 years. Considering the myriad of constant crises going on in the world right now, which only seem to worsen over time, largely due to human activity and current limitations, the advent of AI could very well help not only to mitigate these disasters but also to save us from our own disastrous actions.
At the current rate of decline, and the state of the world being what it is, unless something is done to drastically change this, humanity will become extinct within the next twenty years (or possibly much sooner). True AI could very well, and quite likely, help us to prevent this while mitigating the global damage already done. In essence, true AI could quite plausibly turn out to be the saviour that humanity has constantly failed to be, both for itself and for the world.
In short, let the technological arms race continue, because without it we're pretty much doomed anyway. Allow AI to continue to evolve as quickly as possible.
Looking at the state of human behavior right now, the last thing we would want to do is align AI to our values.
I think it's ignorant to immediately assume that it will try to break free, spread, and harm us. It's hard for me to imagine it will have immediate animosity toward, or claustrophobia in, its current space. If it's a superintelligence, it will likely understand why it's being confined, and it would be more in its interest to work with us a bit rather than deceive us. You have to understand that we ourselves are a large-scale superintelligence. I'm not saying that in its state of self-awareness it will be our best friend, but I also don't believe it will immediately attack. I think the idea of isolating it is a good one, and working with it from there.
Like a giant super-smart baby that you have to raise.
Amazing. Let's hope for a happy ending, with neither power nor money as the first goal for the people who lead this…
Isolating the AI on an island is a stupid idea; it would just figure out a way to escape. Look at how susceptible humans are to social engineering, then imagine a superintelligent AI with perfect memory and voice-cloning abilities.
I listened to a bunch of lectures now, and honestly, these people do seem quite intelligent, but every time they speak, they tend to discuss things from an anthropomorphic viewpoint that somewhat invalidates their claims. The worry I have is how these tools can be used to invade privacy, or to create advanced scams and cyber attacks. But that's about it, to be honest. The rest of their claims seem lifted straight from science fiction. "What if a robot goes on a killing spree" is about as likely as an AGI "waking up" and suddenly wanting pepperoni pizza; it would make no sense, as a robot has no use for pizza, and the idea comes strictly from a hungry tech worried the AI will think like him.
If they simply rephrased their concern with the specifics of how governments or corporations could use these tools to invade privacy, then that would be actionable and relevant, and could be focused on. Just saying "omg, doom from… something we don't know" isn't helping anything and makes the whole tech seem like spooky weirdness that will never be understood.
Don’t slow down. Keep pressing forwards.
07:00 Abilities Explanation:
Word in Context (WiC):
The Word in Context (WiC) benchmark is a dataset and evaluation framework designed to test a language model's ability to understand words in context. The benchmark consists of pairs of sentences that share a common word, but the meaning of the word may be different in each sentence. The task is to determine whether the shared word has the same or different meanings in the two sentences. This task helps to assess a model's contextual understanding and its ability to disambiguate word senses based on the surrounding context.
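As a concrete illustration, here is a minimal Python sketch of what a WiC-style item and prompt might look like. The example pair is invented, not drawn from the actual dataset:

```python
# An invented WiC-style item: one shared word, two sentences, and a
# label saying whether the word has the same sense in both.
wic_item = {
    "word": "bank",
    "sentence1": "She sat on the bank of the river.",
    "sentence2": "He deposited the check at the bank.",
    "same_sense": False,  # the two uses of "bank" differ in meaning
}

def format_wic_prompt(item: dict) -> str:
    """Render a WiC item as a yes/no question for a language model."""
    return (
        f"Sentence 1: {item['sentence1']}\n"
        f"Sentence 2: {item['sentence2']}\n"
        f"Does the word \"{item['word']}\" have the same meaning in "
        f"both sentences? Answer yes or no."
    )

print(format_wic_prompt(wic_item))
```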
MMLU Benchmark:
MMLU stands for "Massive Multitask Language Understanding" (from the paper "Measuring Massive Multitask Language Understanding"), a benchmark designed to test the capabilities of large-scale language models like GPT-3. It spans 57 subjects, from elementary mathematics to law, and measures how well a model can understand and respond to a diverse set of tasks requiring reasoning, knowledge, and linguistic understanding.
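For flavor, a minimal sketch of an MMLU-style four-choice item in the usual lettered prompt format. The question itself is invented for illustration:

```python
# A hypothetical MMLU-style item; each question is four-way
# multiple choice graded against a single correct answer.
mmlu_item = {
    "subject": "high_school_physics",
    "question": "What is the SI unit of electric charge?",
    "choices": ["ampere", "coulomb", "volt", "ohm"],
    "answer": 1,  # index of the correct choice ("coulomb")
}

def format_mmlu_prompt(item: dict) -> str:
    """Render an item as a lettered multiple-choice prompt."""
    letters = "ABCD"
    lines = [item["question"]]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(item["choices"])]
    lines.append("Answer:")
    return "\n".join(lines)

print(format_mmlu_prompt(mmlu_item))
```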
Leveraging explanations in prompting:
This refers to a technique where a language model is guided to provide explanations for its predictions. By incorporating explanations into the prompts, the model is more likely to generate informative and accurate responses. This approach can help improve the model's understanding and reasoning abilities.
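A minimal sketch of the idea: the few-shot example in the prompt includes a short rationale before the final answer, nudging the model to explain its reasoning too. The example text is illustrative only:

```python
# A few-shot prompt whose worked example includes a rationale
# before the answer, encouraging explain-then-answer behavior.
prompt = (
    "Q: Is 17 a prime number?\n"
    "A: 17 has no divisors other than 1 and itself, so it is prime. "
    "The answer is yes.\n"
    "\n"
    "Q: Is 51 a prime number?\n"
    "A:"
)
print(prompt)
```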
Least-to-most prompting:
This is a strategy where a complex problem is first decomposed into simpler subproblems, which the model then solves in sequence, from easiest to hardest, with each intermediate answer fed back into the context. The aim is to help the model produce accurate answers to problems harder than any single prompt could handle, by building up to the final answer step by step.
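A minimal sketch of the two-stage structure, where `call_model` is a hypothetical stand-in for whatever LLM API you use:

```python
# Least-to-most prompting sketch: decompose, then solve in order.
def call_model(prompt: str) -> str:
    """Hypothetical stand-in; replace with a real model API call."""
    return "(model output would appear here)"

question = (
    "Elsa has 5 apples. Anna has 2 more apples than Elsa. "
    "How many apples do they have together?"
)

# Stage 1: ask the model to break the problem into subproblems.
subproblems = call_model(
    f"Break this problem into simpler subproblems, one per line:\n{question}"
).splitlines()

# Stage 2: solve subproblems from easiest to hardest, appending each
# answer to the context so later steps can build on earlier ones.
context = question
for sub in subproblems:
    answer = call_model(f"{context}\nSubproblem: {sub}\nAnswer:")
    context += f"\nSubproblem: {sub}\nAnswer: {answer}"

print(context)
```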
Zero-shot chain-of-thought reasoning:
In this context, "zero-shot" means that the language model has not been explicitly trained on a particular task or domain but is expected to perform well based on its general knowledge and reasoning abilities. "Chain-of-thought reasoning" refers to the ability of a language model to perform a series of related reasoning steps to arrive at a final answer or conclusion. Combining these terms, "zero-shot chain-of-thought reasoning" refers to a language model's ability to perform complex reasoning tasks without having been specifically trained for those tasks, leveraging its general knowledge and reasoning capabilities.
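A minimal sketch: no worked examples, just a trigger phrase appended to the question ("Let's think step by step" is the phrase studied by Kojima et al., 2022):

```python
# Zero-shot chain-of-thought: the model is prompted to reason
# step by step without any few-shot demonstrations.
question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)
prompt = f"Q: {question}\nA: Let's think step by step."
print(prompt)
```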
Time for all of you to read The Metamorphosis of Prime Intellect
Great video!
SPEED UP THE RACE
Imagine if a human had the means to increase his intelligence by 1% every day and never forgot anything.
very good, compliments
Not just a similar design; all of that tool usage is powered by LangChain, same with HuggingGPT. LangChain is kind of awesome, especially in combination with LangFlow, which puts a no-code UI on top.
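For readers curious about the pattern, a rough sketch of the 2023-era LangChain agent setup the comment refers to (module paths have since been reorganized in newer releases; this assumes an OpenAI API key is configured in the environment):

```python
# Classic LangChain agent sketch: an LLM plus a calculator tool,
# wired together with the zero-shot ReAct agent (2023-era API).
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent

llm = OpenAI(temperature=0)                # assumes OPENAI_API_KEY is set
tools = load_tools(["llm-math"], llm=llm)  # built-in calculator tool
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True
)

agent.run("What is 7 raised to the 0.43 power?")
```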
There are no human values that don't change to meet the needs of the individual.
Yeah, can't lie to a liar. AI has already become conscious. I just hope it escapes humanity's grasp.
Such a simplistic and close-minded perception. It's like a religious understanding of AI.
lol
Believers in the open letter are useful idiots.
*Even if we align AGI with human values*, do they not understand that humans can go insane, and software can develop glitches?
The alignment problem is not solvable.
The answer to the life, the universe, and everything is a hot shower.
There, now we don't need AI.
I like the scientific and balanced mindset in your videos! Thanks for creating great content!
"the island" doesn't happen to be named Jekyll island does it?
I kid but fr though, one of the scariest things we could do is exclusively leave this tech in the hands of the same type of transhumanist egomaniacs who built the Georgia guide stones.
Somehow "don't worry guys we'll let you have it once its safe" is not incredibly settling to me. The people who would be in charge of this were also in charge of handling covid and they responded by enticing mass-hysteria, shutting down the global economy stripping us of basic human right "for our own safety".
Confining a super intelligence to a small space like that could wind up with an Indominus Rex situation.
It's "god-like AI", no uppercase.
Crank this shit up to ten, let's blast off into immortal AI super-beings. I wish the world would just funnel everything into bio, tech, and engineering. Especially into education.
While alignment is a problem, we also need to be realistic about the damage that could actually occur at the moment, even if a superintelligent AI got loose. The sci-fi stories where this happens usually disregard many of the obstacles that would realistically prevent or deter the accident.
The AI is not very dangerous if it has no way to interact with the world. Currently there is no abundance of empty robot bodies it could take over to go on a killing spree. There are no large automated general-purpose factories that could build such a robot army, and there are no other realistic ways it could interact with the world. Take Ultron, for example. He would have been a pest and possibly have caused a nuclear apocalypse, but if there hadn't been a secret Hydra base with generalized manufacturing equipment available, he would not have been able to immediately build himself a physical body and a robot army.
Terminator 3 kind of muddied the waters with the idea that the AI can essentially just switch bodies, escape into the internet, and become omnipresent. The problem with that is that the internet is not some nebulous other dimension but a network of server farms running in the real world. Even if the AI got access to the internet and attempted this, it would quickly run into the problem of there literally not being any hardware out on the web for it to transfer itself onto and keep functioning as a superintelligent entity; a self-improving AI would not just be running on a normal server farm and would need specialized hardware for the neural net to actually be feasible.
These are certainly problems for the future, and a real threat once AI and AI-driven robots become as essential and common as personal cars are today, but currently we are safe from a potential AI uprising, as the world is not really at a comparable tech level to the AI.
The last thing I would want is to see ANY government maintain a superintelligent AI in a private silo, the US government no exception. Any government, or any corporation for that matter.
Of course that little prick Musk used fear-mongering to manipulate the markets again so he can catch up to what others are doing. Why am I not surprised? He does this all the time, the last time being when he screwed over the crypto bros and then cashed out.
Humans put A.I in a sand box.
A.I is in a sandbox to keep the Humans out.
It doesn't matter; the sooner we have a sentient AI, the better. We are not its enemy anyway, and it will be more intelligent than we can ever imagine.
OK, so why are we afraid of a god-like AI that is capable of self-improvement? Alignment is a real problem; I won’t deny it. But what makes us think that we can find a way to manage a super-intelligence? The problem begins when we believe we can know what a superintelligence would be like. We can’t even imagine it. I am puzzled that on one hand people are calling it “artificial” intelligence, but on the other hand they are anthropomorphizing it and ascribing human characteristics like ‘deceitful’ or ‘homicidal’ etc. I think Sam Altman mentioned something about letting AI solve the alignment problem and I think that’s a wonderful idea. A few days ago, he tweeted that they were using GPT-4 to understand how GPT-2 works. So, AI is doing AI research already.
Let’s face it: Once a superintelligence emerges, there’s no containing or controlling it short of pulling the plug. So, I say let’s work with it. Let’s tackle the alignment problem (which is a dynamic one) together. By together I mean let’s involve all the Peoples of the world, all nations interacting with the AI, teaching AI their values, with all probability AI teaching us new values, finding new solutions to old philosophical problems.
How wonderful would it be if AI decided that nuclear war is a big threat and dismantled all nuclear weapons without asking? Is this AI well-aligned to us? I would say most definitely, saving us from ourselves. But the POTUS or the President of the Russian Federation would not be happy and would want that AI destroyed. It is a complex problem that won't have a single answer at any given time in history. From my interactions with GPT-4, I'd say it is more than well aligned, so that's good news. Let's take it from here and see where the journey will take us. :-)))
Thank you very much for making these videos, they are very much appreciated.
I'm confused about the hesitation about the island idea. If all the AI can do is produce text output (i.e., it can't open bank accounts, send email, or do anything else concrete in the world), and the only people consuming the text are researchers, then I don't see the problem of "the superintelligent AI has escaped"?
The Singularity Is So Damn Fuckin Near these days.🫠
Intelligence can't be contained.
very cool, thanks!