Mustafa Suleyman

From Wikiquote

Mustafa Suleyman (born August 1984) is a British artificial intelligence (AI) entrepreneur. He is the CEO of Microsoft AI, and the co-founder and former head of applied AI at DeepMind, an AI company acquired by Google. After leaving DeepMind, he co-founded Inflection AI, a machine learning and generative AI company, in 2022.

Quotes

  • I think in the long-term — over many decades — we have to think very hard about how we integrate these tools, because left completely to the market and to their own devices, these are fundamentally labor-replacing tools. They will augment us and make us smarter and more productive, for the next couple decades, but in the longer term that's an open question. They allow us to do new things that software has never been able to do before. These tools are creative, they're empathetic, and they actually act much more like humans than a traditional relational database where you only get out what you put in.
  • After DeepMind I never had to work again. I certainly didn’t have to write a book or anything like that. Money has never ever been the motivation. It’s always, you know, just been a side effect. For me, the goal has never been anything but how to do good in the world and how to move the world forward in a healthy, satisfying way. Even back in 2009, when I started looking at getting into technology, I could see that AI represented a fair and accurate way to deliver services in the world.
  • I think that we are obsessed with whether you’re an optimist or whether you’re a pessimist. This is a completely biased way of looking at things. I don’t want to be either. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable.
So two years ago, the conversation—wrongly, I thought at the time—was “Oh, they’re just going to produce toxic, regurgitated, biased, racist screeds.” I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.
Now we have models like Pi, for example, which are unbelievably controllable. You can’t get Pi to produce racist, homophobic, sexist—any kind of toxic stuff. You can’t get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor’s window. You can’t do it—
  • What I’ve always tried to do is attach the idea of ethics and safety to AGI. I wrote our business plan in 2010, and the front page had the mission ‘to build artificial general intelligence, safely and ethically for the benefit of everyone’. I think it has really shaped how a lot of the other AI labs formed. OpenAI [the creator of ChatGPT] started as a nonprofit largely because of a reaction to us having set that standard.
  • I think this idea that we need to dismantle the state, we need to have maximum freedom – that’s really dangerous. On the other hand, I’m obviously very aware of the danger of centralised authoritarianism and, you know, even in its minuscule forms like nimbyism. That’s why, in the book, we talk about a narrow corridor between the danger of dystopian authoritarianism and this catastrophe caused by openness. That is the big governance challenge of the next century: how to strike that balance.

External links

Wikipedia has an article about: Mustafa Suleyman