Stephen Wolfram on Language, Computation, Reality and ChatGPT

in reality •  last year 

Good to hear Wolfram put language-based AIs such as ChatGPT in some context; they are large fill-in-the-blanks algorithms using neural nets to generate essentially Aristotelian language-constructs. This is in contrast to Wolfram Language, which is designed as a symbolic computational language.

Language-AIs are just searching for words that "fit" from a huge database, whereas computational-AIs attempt to calculate answers that are then output as language. For Wolfram, ChatGPT is a shallow computation on a huge set, whereas Wolfram Alpha, and now Wolfram Language, aim to perform deep computations that may even lead to new results.

The reason he is adding a ChatGPT plugin is to see whether, with time and feedback, the underlying neural network may reveal linguistic functions that can then be encoded as computational models. I've always found it both alarming and pathetic that ChatGPT can't do even basic mathematics - now it can, by using the Mathematica engine via the Wolfram plugin.

The challenge here is whether it is even possible to create computational systems that can answer essentially human questions about life, the universe and what's for dinner. Something Wolfram discovered is that some very simple programs can do very complex things. When cellular automata programs first appeared, one could run one's own experiments to show this. This is also where the "butterfly effect" idea comes from - sometimes real, often overstated. Even the complexity of fractals comes from a single line of computation.

What Wolfram doesn't really spell out - or not until much later in the interview - is that this complex behaviour arises from feedback loops within the algorithms. Such computations are recursive: each output becomes the next input, and so on. Even ChatGPT uses recursion and feedback; Wolfram cites amusing examples where the AI realises it has said something wrong - but it can only analyse that after it has done so. What is truly dangerous is that, in doing so, it can build up a totally wrong picture of the world, based on human stupidity and manipulation - any insights buried in the language will have such low probabilities that they are largely ignored.
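To make the "simple programs, complex behaviour" point concrete, here is a minimal sketch (my illustration in Python, not Wolfram's code) of Rule 30, the elementary cellular automaton Wolfram often cites. Each cell's next value depends only on itself and its two neighbours - a single line of logic - yet the triangle that unfolds looks chaotic:

```python
def rule30_step(cells):
    """One step of Rule 30: new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(width=31, steps=15):
    """Start from a single live cell and feed each row back in as the next input."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)   # output becomes the next input
        history.append(row)
    return history

for row in run():
    print("".join("#" if c else "." for c in row))
```

Each row feeds into the next - precisely the recursive feedback loop described above; the width and step counts here are arbitrary choices for display.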

What you then find is that some of these computations are irreducible, meaning that there is no way to reach the final answer other than going through the whole sequence of steps. Others, known as reducible computations, can be collapsed into a formula that jumps ahead to the answer; this works because the computation is predictable. Most complex behaviour is not predictable, even if it is perfectly deterministic. The two concepts, once thought to be synonymous, are quite different.
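A toy illustration of reducibility (my example, not Wolfram's): summing 1 to n step by step versus using the closed-form shortcut n(n+1)/2. Here a formula exists that jumps ahead; for an irreducible computation, only the step-by-step route does:

```python
def sum_by_steps(n):
    """Walk through every step - the only option if no shortcut exists."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Reducible: a closed form jumps straight to the answer."""
    return n * (n + 1) // 2

print(sum_by_steps(1000), sum_by_formula(1000))  # 500500 500500
```

The loop's cost grows with n; the formula's does not - that gap is what "jumping ahead" buys you when a computation happens to be reducible.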

One simple example comes from school algebra. Every student learns that there is a formula to solve any quadratic equation: learn the formula, plug in the coefficients, and you can easily calculate the solutions. Some courses also teach how to solve quadratics recursively: rearrange the equation into a recursive form, start with a first guess, and calculate a sequence of outputs, each serving as the next input. Most of the time these will converge to one of the roots (but not the other), while sometimes (if you're unlucky) they will diverge towards infinity.

In that case, why bother with the recursive method when we have a simple formula? Partly just to teach students how recursion works - and when it doesn't - but here it also illustrates the difference between the two methods, and highlights that if a computation is irreducible, there is only the recursive procedure, with no short-cut algebraic solution.

One amusing thing is that even the humble quadratic hides a parallel universe of fractals, in those zones where the recursion neither converges nor diverges but appears to jump around. I discovered this while studying calculus, before I ever read Mandelbrot's book, so such ideas are somehow "around" once discovered - or, more simply, the tech is there to make them easier to explore, such as a graphic calculator. It was also funny seeing a Mandelbrot set come out of a dot-matrix printer! It looked like a squashed spider. But I digress.
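In the spirit of that dot-matrix printout, here is a minimal escape-time sketch (grid bounds, resolution and the iteration cap are my arbitrary choices) of the quadratic recursion z → z² + c: points whose iterates never escape are printed as `#`, and the boundary between the two behaviours is where the fractal lives:

```python
def escapes(c, max_iter=50):
    """Iterate z -> z*z + c from z = 0; True if |z| ever exceeds 2
    (after which divergence to infinity is guaranteed)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

# crude ASCII Mandelbrot set - the squashed spider
for row in range(21):
    y = 1.2 - row * 0.12
    print("".join("#" if not escapes(complex(-2.1 + col * 0.045, y)) else "."
                  for col in range(64)))
```

The same humble quadratic, iterated: some c converge, some diverge, and on the boundary between them the behaviour just jumps around - that boundary is the fractal.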

The future may well be deterministic, in that recursive calculations are deterministic, but it is also unpredictable. We cannot tell the difference between free will and the illusion of free will; and what does the word "free" add to having a "will" anyway?

The interesting thing regarding human knowledge and computational theory is that our minds, and our sciences, focus on those islands of predictability in order to make some sense of the world. In an ocean of complexity, it is not yet certain we have found all those islands of relative stability. An even deeper problem is whether the knowledge we think we know is true at all. The danger regarding AI is that they become merely more public propaganda generators. Wolfram's knowledge-engine may be a much-needed antidote to the AI-parrots. We shall see if they ever clash in public.

Wolfram has been doing this work for a long time; he knows the dangers, the pitfalls, but also the promise of uncovering new knowledge. This is a very different kind of science from that of the 19th century, which got bogged down describing complex systems with ever-more complex equations that were then largely unsolvable, apart from those pockets that could be simplified. Lex Fridman does get some good guests, but I'm not a fan of his transhumanist agenda - he seems to think that a machine-Lex will somehow be a better Lex than a human-Lex. I just wish he'd go ahead and try it - then let us know... or not.


Try shaking the hand of your mirror-image.


This is also interesting


and this


All three are recent interviews and somewhat overlap, although sometimes the language used differs.


OpenAI CEO to testify before Congress alongside ‘AI pause’ advocate and IBM exec

AI experts from IBM, NYU and OpenAI will testify before the U.S. Senate on May 16 in a hearing entitled “Oversight of A.I.: Rules for Artificial Intelligence.”

Live video will be at: https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence
