I'm more concerned about powerful AI than "intelligent" AI

(Header image generated with https://www.craiyon.com/)

Whenever the topic of advanced machine learning systems comes up, people seem to inevitably end up discussing whether certain systems are or could be "intelligent", or whether a system is or could be "general" enough to count as "artificial general intelligence" (AGI). For example, Yann LeCun had this to say in a tweet:

There is no such thing as Artificial General Intelligence because there is no such thing as General Intelligence.
Human intelligence is very specialized.

Blake Richards elaborates further in an interview that is summarized here. Richards explains:

We know from the no free lunch theorem that you cannot have a learning algorithm that outperforms all other learning algorithms across all tasks. [...] Because the set of all possible tasks will include some really bizarre stuff that we certainly don’t need our AI systems to do. And in that case, we can ask, "Well, might there be a system that is good at all the sorts of tasks that we might want it to do?" Here, we don’t have a mathematical proof, but again, I suspect Yann's intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, but it’s not going to cover everything you could possibly hope to do with AI or want to do with AI.

François Chollet has an entire paper about how to define "intelligence" as it relates to AI. He relates this to the goal of AI research:

    The promise of the field of AI, spelled out explicitly at its inception in the 1950s and repeated countless times since, is to develop machines that possess intelligence comparable to that of humans. But AI has since been falling short of its ideal: although we are able to engineer systems that perform extremely well on specific tasks, they have still stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers.
 
    If the only successes of AI have been in developing narrow task-specific systems, it is perhaps because only within a very narrow and grounded context have we been able to define our goal sufficiently precisely, and to measure progress in an actionable way. Goal definitions and evaluation benchmarks are among the most potent drivers of scientific progress. To make progress towards the promise of our field, we need precise, quantitative definitions and measures of intelligence – in particular human-like general intelligence.

Gary Marcus comments on changes in paradigms over time, critiquing a move from earlier focus on "natural intelligence" to what he calls "alt intelligence"[1]:

    For many decades, part of the premise behind AI was that artificial intelligence should take inspiration from natural intelligence. John McCarthy, one of the co-founders of AI, wrote groundbreaking papers on why AI needed common sense; Marvin Minsky, another of the field's co-founders of AI wrote a book scouring the human mind for inspiration, and clues for how to build a better AI. Herb Simon won a Nobel Prize for behavioral economics. One of his key books was called Models of Thought, which aimed to explain how "Newly developed computer languages express theories of mental processes, so that computers can then simulate the predicted human behavior."
 
    A large fraction of current AI researchers, or at least those currently in power, don't (so far as I can tell) give a damn about any of this. Instead, the current focus is on what I will call (with thanks to Naveen Rao for the term) Alt Intelligence.
 
    Alt Intelligence isn't about building machines that solve problems in ways that have to do with human intelligence. It's about using massive amounts of data – often derived from human behavior – as a substitute for intelligence. Right now, the predominant strand of work within Alt Intelligence is the idea of scaling. The notion that the bigger the system, the closer we come to true intelligence, maybe even consciousness.

Melanie Mitchell has a paper[2] that lays out four critiques (stylized as "fallacies") of how people talk about AI:

  • Fallacy 1: Narrow intelligence is on a continuum with general intelligence

  • Fallacy 2: Easy things are easy and hard things are hard

  • Fallacy 3: The lure of wishful mnemonics

  • Fallacy 4: Intelligence is all in the brain

Steven Pinker is quoted as saying the following in a discussion[3] about "AGI":

I find most characterizations of AGI to be either circular (such as "smarter than humans in every way," begging the question of what "smarter" means) or mystical—a kind of omniscient, omnipotent, and clairvoyant power to solve any problem. No logician has ever outlined a normative model of what general intelligence would consist of, and even Turing swapped it out for the problem of fooling an observer, which spawned 70 years of unhelpful reminders of how easy it is to fool an observer.

From the viewpoint of scientific or philosophical curiosity, I think these are interesting ideas. But I don't think they're super relevant to whether or not we should be concerned about the risks of advanced machine learning systems. The reason is that I'm much more concerned about how powerful machine learning systems will be, even if they don't meet a given definition of "intelligence".

Is "intelligence" a red herring?

One idea that floats around in these discussions, either explicitly or implicitly, is "scaling". This refers to the observation that in certain domains (especially so-called "large language models"), using larger models (e.g. models with more parameters) or using more resources to train them (more training data, more compute) often produces better results on certain metrics of how "good" those models are. The idea of "AI scaling" is that simply making models larger and training them with more data and compute might lead to large increases in the capabilities of ML models. It also seems like increasing amounts of resources are being devoted to training large machine learning models, so it's pretty reasonable to expect that in the not-too-distant future, companies or other institutions that invest in this training will have access to these larger[4] models.
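
To make the scaling observation a bit more concrete, here is a minimal sketch in Python of the kind of relationship people point to: a stylized power-law curve relating model size to test loss, which is the general shape reported in the empirical scaling-law literature. The constants and the `stylized_loss` function are invented for illustration and are not measurements of any real model.

```python
# Illustrative sketch only: a stylized power-law scaling curve of the kind
# reported empirically for large language models. The constants below are
# made up for illustration, not fitted to any real system.

N_C = 1.0e12   # hypothetical reference parameter count
ALPHA = 0.05   # hypothetical scaling exponent


def stylized_loss(num_params: float) -> float:
    """Hypothetical test loss as a function of parameter count: (N_C / N) ** ALPHA."""
    return (N_C / num_params) ** ALPHA


for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> stylized loss {stylized_loss(n):.3f}")
```

The only point of the sketch is that, under a curve with this shape, each order-of-magnitude increase in model size buys a further, fairly predictable improvement on the metric, which is the pattern that makes the scaling argument feel compelling.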

In terms of the intelligence discussion, a big part of the debate is whether "scaling" machine learning models can result in "AGI". As I mentioned, research on existing models suggests that larger models perform better according to certain metrics. But what if these metrics aren't sufficient to really measure intelligence or generality? Sure, scaling up models might allow them to do some impressive stuff, but maybe there are types of reasoning that humans can do that can't be achieved by scaling current models. Does this mean that scaling can't lead to "true" intelligence, and thus can't lead to AGI? Maybe using more data is really a way to get around the need for "generality" rather than a way to achieve it. Maybe "true" generality is impossible!

If the question really is "can scaling lead to AGI?", for a given definition of "AGI", then I agree these are all relevant questions. But I think it is the wrong question for people who are interested in the practical implications of machine learning systems and their associated risks. The more important question is whether scaling can lead to extremely powerful machine learning systems, where "power" means having a lot of influence on the world and the potential to cause big changes in people's lives.

To my mind, this follows from what the goals of a scientific field ought to be. Chollet references related issues in the paper linked above, arguing that the promise of this area of research has always been that it would produce "machines that possess intelligence comparable to that of humans". But I think all areas of research also implicitly have (or at least ought to have) an additional requirement: that they produce research outputs that are safe and beneficial for the world. Machine learning systems having a lot of power and influence over the world raises questions about whether those systems will in fact be safe and beneficial, even if they aren't "intelligent".

ML systems will likely get a lot more powerful

If we ask about power instead of intelligence, I think a lot of the issues raised in the quotes above aren't really as relevant. For example, LeCun says in the tweet mentioned above that there's no such thing as general intelligence, but also seems to think that "human-level" AI is possible, and even has a paper where he proposes a way to actually build such an AI system. I think under any reasonable definition a "human-level" AI system would be very powerful, and its existence would radically change the world. And a system doesn't need to be intelligent to be powerful. As a result, the discussion around the appropriate definition of intelligence or generality doesn't speak to whether an advanced machine learning system could be powerful.

If we accept that we should be concerned about power instead of intelligence, I think the scaling argument becomes hard to ignore. Part of the objection to the scaling argument for AGI is that large AI models may be able to do some cool things, but are missing other things that are core to intelligence. That doesn't seem like as much of a problem for powerful systems, though. A system can have a huge influence on the world even if it is somewhat narrow or lacks certain properties that we might ascribe to "intelligence". If the argument is that the models are getting more of some property, but that property is not intelligence, then I think "power" is a very likely candidate for what that property is instead.

In fact, I think part of the reason people feel the need to comment on the potential for scaling to lead to AGI is that the scaling results look, on their face, so impressive. As a result, people who think scaling isn't sufficient for AGI want to push back. They may have lots of reasonable points about the nature of intelligence, but I think the reaction is in some ways an implicit acknowledgment that it looks like scaling could lead to very powerful models.

Errors don't negate the potential for power

Okay, so large models can do some impressive stuff, but what if in some ways they are also impressively bad or incompetent? Gary Marcus, for example, has been pointing out cases where he thinks existing models fail in important ways. I think even very large models will make errors or have flaws, but I think they will still be powerful despite those flaws. GPT-3, one of the models Marcus targets for criticism, is a good example. It's extremely impressive (in my opinion) in its ability to generate natural-seeming language and to do many tasks, but it can still generate extremely weird or dangerous responses, as Marcus points out[5]. It's also been observed that as models get larger they may suddenly become very useful for certain tasks. This means that at any given size, a model may be really good at certain things but really bad at others. Such a system may be powerful because it is good at certain very important or economically relevant tasks, while still being extremely flawed.

If that's the case, the existence of those flaws presents an important safety concern. In some ways, it's the most dangerous scenario: model developers may be very tempted to deploy (i.e. actually use) such a system because its results look good on certain metrics, while its flaws could be much harder to detect. Part of the argument with regard to intelligence also centers on how "human-like" ML systems are (e.g. "natural" vs. "alt" intelligence), and part of the relevance of errors to that debate is whether they demonstrate that these systems aren't "human-like". As I argued in my previous post, I think ML systems not being similar to humans creates a safety issue, so a lack of "human-like" intelligence also does not make me feel any better about the risks of large AI models.

The most important meaning of "power"

The central characteristics of an AI system I am concerned about are the ones that help us understand the extent to which it is safe and beneficial to humanity. I guess you could argue that a system could be dangerous but not "powerful" by some definition, but since I'm primarily concerned with how AI will impact the world, "powerful enough to be dangerous" is a very important threshold in my mind. If that's primarily what you're interested in, I think you should focus on the possibility of AI systems being very powerful, instead of worrying about whether they are "intelligent", "general", or meet (or could meet) a particular definition of "AGI".


  1. Internal links omitted. ↩︎

  2. Same one referenced in my previous post. ↩︎

  3. Another case where I'm referencing this discussion again after mentioning it in my previous post. ↩︎

  4. As mentioned, training data and compute also matter in addition to model size, but I'll just use "larger" as shorthand here. ↩︎

  5. To be fair to Marcus, he may agree that current AI systems have the potential to be dangerous. In fact, his substack has the title "The Road to AI We Can Trust", so I would not be at all surprised if he does think there is a safety issue. I'm not trying to imply anything about his personal views on AI safety, but I do want to make clear what I think the implications of some of his arguments are with regard to safety concerns. ↩︎