
Knowledge-First AI, GPT3, and More

January 16, 2023
ep 86 with Christopher Nguyen

description

This week my guest is entrepreneur and technologist, Christopher Nguyen, who–in addition to starting his own companies–has spent four decades working at some of the world’s biggest tech giants.

In this episode we explore many facets of AI, including Christopher’s ideas around knowledge-first AI, bad training data for AI, issues with the black box, GPT3, deepfakes, and much more.

Follow Christopher and his AI work with Aitomatic at twitter.com/pentagoniac


Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.


Christopher Nguyen [00:00:00] You can have terabytes of data, but it doesn't contain the expertise that your engineer or even your user has accumulated in the industry over the last 20, 30 years in their brain. So knowledge-first AI is about the combination of human knowledge and data to build better predictive models than you could build with data alone. 

Steven Parton [00:00:37] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop on Singularity Radio. This week my guest is entrepreneur and technologist Christopher Nguyen, who, in addition to starting his own companies, has spent four decades working at some of the world's biggest tech giants. In this episode, we focus on exploring the many facets of artificial intelligence, including Christopher's ideas around knowledge-first AI, bad training data for AI, issues with the black box, GPT-3, deepfakes, and a whole lot more. So without further ado, let's just jump into it. Everyone, please welcome to the Feedback Loop, Christopher Nguyen. So, you know, one of the places that I think would be interesting to start with you is to go through a little tour of your history, because over the past 40 years, I would say, you've worked at some of the biggest companies, tech companies specifically, on the planet: Intel, HP, Xerox and Google, just to name a few. So being so close to these companies that are spearheading innovation in the way that they have, what kind of changes have you seen over the decades in terms of how the tech industry is functioning and how these companies are responding to the changing of the times? 

Christopher Nguyen [00:01:57] Well, actually, one part of my work career, if you will, that I haven't shared, that I didn't talk about, is that I did work when I was in high school. I spent a year at NASA Ames Research Center. And this is going to date me, but the project I was working on was to identify constellations in star maps that are very promising for the potential discovery of extrasolar planetary systems. So basically that was before we ever saw one. You can envision maybe the 19th century or something, but there was a time when we didn't see anything outside our own planetary system. That was actually my first science and engineering job, when I was a senior in high school. But since then, I've had the opportunity of participating in and working on so many different waves of technology. And I'm perhaps one of those weird people that can learn very quickly and adapt to each new wave. So today I'm working, of course, on AI, and specifically industrial AI. I think one of the biggest things that has changed, and you can take Silicon Valley as a proxy, right, is that it has shifted all the way from manufacturing, right, making what we call the atoms, right, making transistors and chips and so on, all the way through software and through consumer and so on. And the interesting thing that's happening now, particularly as it relates to Singularity Radio and Ray Kurzweil and so on, is that we are finally touching human intelligence. Right. And there's something qualitatively different about that. You know, people like to say "it's different this time" as a joke, but I do feel that it is different this time. We've always augmented ourselves. The glasses that you and I are wearing are an augmentation, like our Google. You know, we would cease to function if we didn't have access to our Google and our mobile devices. So the augmentation of technology is not new at all; these were the technology at some point. But I think there's something qualitatively different, very powerful and also very disturbing, when we think about augmenting our minds, right, with things that may possibly be smarter than us. 

Steven Parton [00:04:28] Yeah. Well, and recently you founded a company, and pardon me if I pronounce this incorrectly, but I think it's Aitomatic? 

Christopher Nguyen [00:04:35] Absolutely. That's correct. Yes, that's how you pronounce it. 

Steven Parton [00:04:38] And so it's a focus on what I think you call AI engineering. What is that exactly? And can you kind of take us through a walk or tour of what the company is focusing on? 

Christopher Nguyen [00:04:49] We are focused on something called knowledge-first AI. Right. And that is to distinguish it from knowledge-second, knowledge-third. Otherwise you always have knowledge in AI. But Silicon Valley for the most part today, and I use the term Silicon Valley here as a proxy for the general consciousness, the kinds of things that you hear, you know, that you read in TechCrunch and Fortune magazine and from the shapers and so on, that bubble is very much concerned with data-first AI, if I can coin the term. Meaning, you know, in 2012, right? That was when the cat paper came out, you know, and my friends at Google had worked on a deep neural network that watched YouTube videos, cat videos, and eventually somehow there emerged an understanding, or at least the ability to identify cats versus dogs versus, you know, Britney Spears and so on. And by the way, there's an aside there: the folks who worked on that project actually started out trying to identify Jennifer Aniston, but referring to it as cats is more user friendly. Since then, the industry has been very much about digital AI, right? Bits in, bits processed, bits out. And that turns out to be the easier part. And in retrospect, it makes sense. We work on the low hanging fruit first. But the hard part is where AI or machine learning hits the physical industry, which, you know, we're a bit ahead of being able to upload ourselves. We're still physical beings, right? We still drive cars, we still eat fish. So there's a $25 trillion industry out there. And for the first five or ten years, they've been thinking, we're doing something wrong because we're not succeeding at this. And by the way, I was in that; my previous company was acquired by Panasonic, which, as you may know, makes all of the Tesla batteries, and has a cold supply chain for refrigeration, automotive and so on. And this industry has been struggling with: how do we take advantage of this thing, to increase revenues, to make better products and so on. And they repeatedly run into walls. And it turns out that wall is because, and this is somewhat surprising, but I would like to say no longer controversial, there's not enough data. Hmm. To train in the same way that we do in the digital industry, you know, when you're trying to predict clicks on ads or, you know, your social networks and so on. So knowledge-first AI turns out to be a key solution to that, which is to apply human knowledge, domain expertise, things that are outside that dataset. You can have terabytes of data, but it doesn't contain the expertise that your engineer or even your user has accumulated in the industry over the last 20, 30 years in their brain. So knowledge-first AI is about the combination of human knowledge and data to build better predictive models than you could do alone with data. And that's it. You know, depending on who you talk to, that's intuitively obvious or it's actually quite controversial. But that's what we focus on. 
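
To make Nguyen's point a bit more concrete, here is a minimal sketch, in Python, of what blending a domain expert's rule with a data-driven model could look like at its simplest. The equipment-temperature scenario, the numbers, and the blending weight are all hypothetical illustrations for this episode page, not Aitomatic's actual method.

    import numpy as np

    # A little sensor history (temperature in C -> hours to failure),
    # far too small to train a good model on its own.
    temps = np.array([60.0, 70.0, 80.0, 90.0])
    hours_to_failure = np.array([100.0, 80.0, 55.0, 30.0])

    # Data-only model: a simple least-squares line fit to the few points we have.
    slope, intercept = np.polyfit(temps, hours_to_failure, 1)

    def expert_prior(temp_c):
        # Hypothetical rule of thumb from a veteran engineer:
        # above 85 C, expect failure within roughly 25 hours; otherwise ~90 hours.
        return 25.0 if temp_c > 85.0 else 90.0

    def knowledge_first_estimate(temp_c, trust_in_expert=0.5):
        # Blend the expert's prior with the data-driven estimate.
        data_estimate = slope * temp_c + intercept
        return trust_in_expert * expert_prior(temp_c) + (1.0 - trust_in_expert) * data_estimate

    print(knowledge_first_estimate(88.0))

The point of the sketch is only the shape of the idea: the expert's rule contributes information that the tiny dataset simply does not contain, and the prediction uses both.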

Steven Parton [00:08:22] And obviously you're going to be biased here, but how is it working? I mean, does it feel like this knowledge-first approach is starting to make some inroads in the industry, in the material space, where the data-first approaches hadn't in the past? 

Christopher Nguyen [00:08:37] As a concept, it is completely proven now, I would say it that way, and let me break that down so it doesn't sound like magic. If you have a Tesla, you know, today, when you drive a car that is self-driving, and if you have the latest version, FSD beta, it is pretty much driving itself, you know, around 90% of the time. There's a lot of human knowledge built into that. A lot of it. Without that, you would not have that. So that's what I mean by, as a concept, it is proven. Right. That's number one. Number two, when you talk to the industrial people, the manufacturers, the wind farm people, the aerospace people and so on, all of that makes a lot of sense to them, so you don't have to sell that idea to them. They say, I always knew this, right? The real challenge is, do you do that sort of manually? Okay, if I buy into something like that, is there an automated way to do that? 

Steven Parton [00:09:42] Right. 

Christopher Nguyen [00:09:43] So over the next five years, you're going to see more and more companies like ours trying to automate that process, to make it easier. Right? And you can already sort of see it emerging recently, you know, ChatGPT and so on. Those tools, the foundational models, are going to be much more powerful, not these iterations; those are what I call demos. Right. But the real thing is going to be the ability to translate automatically, for the first time, if you will, human knowledge, natural language, spoken words, written words and so on, into some kind of structured form, which can then be combined with our very structured machine learning kind of domain. Right. And that's the innovation that's happening today with these companies, including ours. 

Steven Parton [00:10:34] Yeah. And you touched on the collaborative aspect there of the human and the technology, and GPT for that matter. And thinking back to Deep Blue beating Garry Kasparov in '97, to GPT-3 showing breakthroughs right now that are really just blowing people's minds, for lack of a better phrase, the common fear is that in a lot of ways this is starting to make humans obsolete. So as an advocate for this human-AI collaboration, how do you address that concern that humans are, in fact, becoming obsolete? 

Christopher Nguyen [00:11:11] Well, let me first get something out of the way, right? I'm not one of those techno-optimists that say, oh, it's automatically going to be, you know, fine and dandy. So there are a lot of hidden dangers in AI, just as there are with any tool, any powerful tool. Right. The power of a tool is not inherent within the tool, but in where it is applied and the scale at which it is applied. Right. We have something, you know, even before AI: the Internet is something where we can, for the first time, reach 7 billion people. A hundred years ago, you do something, no matter how smart you are, you probably reach 100 people, and then 2,000 people, 10,000 people and so on. So the reach alone is to be respected, if not feared. Mm hmm. So having said that, there are intentional ways that we can say, let's do it in this direction rather than let it flow in, you know, any potentially bad directions. And I think one of the things is what we just touched on: we augment. Technology has always augmented our abilities. And so take what we can see today already, right, GPT, LaMDA, various, you know, large language models, and ones that are multimodal, including images and soon sound and, you know, speech and video and so on. It's going to make us much more powerful than we were before, just as any technology has done. And if you sort of try to repeat that, maybe you can predict 50 years out as Ray Kurzweil has. But at any given iteration, you can just say, well, I'm going to try to ride on top of this in some predictable, responsible way. And I think that's my optimism. Of course, if you extend that, then it does get to something which may be a little disturbing, but the future has always been disturbing. So, for example, augmenting our own minds, right, directly with these models, I think is the path of the future, even though it seems very disturbing to some today. But, you know, just as me sitting here in this little box with a laptop can be quite disturbing to somebody who worked in the field a hundred years ago. Yeah. 

Steven Parton [00:13:33] So would you say then, that that brain-computer interface approach that you're kind of touching on there is a natural place we'll end up if the trajectory continues without too much steering? Or do you think that that's a place that we do have to steer ourselves towards to maybe keep that symbiotic, beneficial collaboration, rather than something that's more like an us versus them? 

Christopher Nguyen [00:13:57] Yeah, I think we absolutely need steering. There's sort of low resolution and high resolution steering. I think that is one of the optima that we will find ourselves at as a human species, one way or another. Mm hmm. But within that path, there are very disturbing ways to navigate it, and there are more user friendly ways to navigate it, I think. I think the steering at high resolution, particularly as it touches the mind, the human mind, which we know very little about, and with which we control ourselves, our actual selves, is something we need to be very careful about navigating, but that's not to say not to do it. Kevin Kelly of Wired has a term that I really like. He has a book, What Technology Wants, right. Think about technology as its own species. What technology wants, of course, is a cumulative effect of human will. But it looks like technology wants to live, to proliferate. Technology wants to be integrated, technology wants to evolve. And so technology wants that. Well, what are we going to do to control and manage the navigation of that path? 

Steven Parton [00:15:16] Yeah. And before we move away from GPT, just because I think it's such a relevant topic right now, and I don't know if this is beyond your scope at all, but could you maybe just explain how it works a little bit for our listeners who might not be quite familiar with it? And maybe you can even add your thoughts about, is it as good as we think it is? You know, is there a lot of hype right now that is unfounded, or is this something that's truly opening the door for a revolution in the field? 

Christopher Nguyen [00:15:50] Right. That's an interesting discussion. There are always going to be extremes, right? Mm hmm. And usually you can assume that the extremes are going to be wrong. But they help form the boundaries or the brackets, sort of the rails of the discussion. There's one extreme that says this thing is, you know, this thing is sentient already, you know, it should be treated as another organism and have rights and things like that. And then the other extreme says, oh, this doesn't mean anything, you know, it's just a bunch of, you know, bits that move and just repeat what we say. I think both of those extremes are wrong. Mm hmm. As it relates to ChatGPT, or more fundamentally the large language models, the foundational models behind it, and relating it to technology generations that I've been through: another thing that I worked on at Intel was the first flash devices. Or think about the very first transistors, and now you've got probably, you know, dozens of them on your body. I usually, almost always, know exactly what's happening behind the scenes. Mm hmm. Right. But that does not take away much of the wonderment that I feel. Right. I like to say there's a difference between knowing and feeling, right. Both good and bad. Sometimes it takes a knock on my head before I really feel that, okay, that's not a good idea, even though I could logically work it out. So I think ChatGPT, and what a lot of people now see in ChatGPT, is one of those wonders: even knowing, you know, exactly what's happening behind the scenes, it is still awesome. Right. It's like the first moment Steve Jobs, in 2007, stood on the stage and started, you know, swiping the screen, and the thing moves, it speeds up and slows down, and it's like, wow. Right. So there's this wow moment. And I am one of those people that say, I want to relish this moment. I don't want to dismiss it. And I don't want to be too fearful of it. So coming back to, you know, what it does and how it does it: today it is possible, and I think most of your audience will already be familiar with this, and if they haven't, they definitely should open an account and try it, it is possible to go to a chat window with ChatGPT, and by the way, OpenAI is not the only company doing this, right, so this is just one of the examples. You can intelligently have a dialog with this thing. Right. And, you know, five years ago, five short years ago, we were talking about initiative and creativity. Right. This thing appears, and I'm not saying underlying, you know, I'm just describing, not yet saying what's underneath, it appears to have knowledge about the world. It's not just repeating something. And as a tool, I'm already using it to generate ideas. You know, maybe I shouldn't share this, as it's kind of a temporary advantage: when we deal with a new customer, a new use case, where I don't know the space reasonably well, we're already using it to ask, hey ChatGPT, or, you know, whatever is there, what use cases might we apply this knowledge-first AI to, and it gives pretty reasonable suggestions. Not necessarily correct, but certainly much better than me alone. And then I would take those five ideas that it gives me and I would research them and, you know, be very efficient at it. If I had had to do this just a month ago, I would have had to Google and sort of try to learn the area and so on. 
So, in other words, it is already, and I'm going to use this word, it is already teaching me something, right. At least as a friend, right. Maybe not smarter than me, or maybe it is smarter than me, but it's more than a dictionary, it's more than Google, right. It seems to have an understanding and a knowledge of the world. And what's really behind it is no more, but also no less, than having fed a very basic, at the core it is basic, and I'm not trivializing the engineering prowess that is worth a lot, but at the core there's just a learning algorithm, right, which is, generally speaking, a deep neural network. And that deep neural network, when it does the computation of the so-called inference, you give it a bunch of inputs, it multiplies those inputs by some weights, and then it sums them up, and then it sends those outputs to the next layer, and that's it. And then you repeat that, you know, 27 times, 30 times, and so on. When you train it, then you basically say, just try it. And then if you make a mistake, right, if you predict the wrong output, I'll tell you, and you do it a billion times, and then after a while it sort of learns, right. What it is really learning is what we call the statistical distribution of the underlying data. I hope that's not too complicated. The world is not random, right? The fact that I'm looking at you, you're looking at me, we're not a random collection of pixels. If it were, it would be like white noise. Right? So the fact that it is not random means there is a statistical pattern in everything. So when I speak a sentence, there's a sense to it, right? It follows a certain sequence and so on. And so if you look at two or three or a million sentences, you begin to see correlations among these things. So then it emerges that the word "good" appears in certain places next to other words, and in other sentences and other paragraphs and other articles and other collections of articles, in a certain ordered manner. Right. And so it has been really just a matter of scaling that idea up, right, to terabytes of data. And, you know, of course, that requires a lot of compute to do. But what happens is that once you do that, there emerges an embedded understanding, and I'm going to use that word without much embarrassment, sufficiently clinically, a definition of understanding such that it can spit back out not just memorization, but the ability to combine the words I say and the concepts that it understands, so that it spits out something like, for example, the suggestions I just mentioned of use cases that might be of interest to an industry I know very little about. So that's large language models in a nutshell. And the thing that I would extend it to is that language models are a key step, but it's not only text. Right. Very, very soon, well, we're already seeing text and images being correlated. And that's why you have something like DALL-E, where you can type in a description and it generates an image. And now Facebook, or Meta research, has created something where you can type in sentences and it generates video. Right. These are sort of generative capabilities. It has to have something where it can generate things; it's not just repeating things. And that's where we are. 
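
For listeners who want to see those mechanics spelled out, here is a minimal sketch in Python of the loop Nguyen describes: multiply inputs by weights, sum them, pass the result to the next layer, and nudge the weights whenever the prediction is wrong. The tiny two-layer network, the toy target, and the single-layer update rule are illustrative assumptions, nothing like GPT-3's actual scale or training setup, but the arithmetic is the same in spirit.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # layer 1 weights
    W2 = rng.normal(size=(8, 1))   # layer 2 weights

    def forward(x):
        # Multiply inputs by weights and sum (x @ W1), apply a nonlinearity,
        # then send the result on to the next layer.
        h = np.maximum(0, x @ W1)
        return h @ W2

    # "Just try it, and if you make a mistake I'll tell you" -- repeated many times.
    for _ in range(1000):
        x = rng.normal(size=(1, 4))
        target = x.sum()              # a toy statistical pattern for the network to learn
        pred = forward(x)
        error = pred - target
        # Nudge only the last layer's weights against the error:
        # a crude gradient step, kept deliberately short for this sketch.
        h = np.maximum(0, x @ W1)
        W2 -= 0.01 * h.T * error

Real models repeat essentially this recipe across billions of weights and terabytes of text, which is the scaling-up Nguyen refers to.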

Steven Parton [00:23:21] Are you concerned about things like deepfakes and kind of where this is going? The ability for us to create custom text this quickly, and videos this quickly. Yeah. It feels like there's a lot of room for bad actors to have some impact with these tools. 

Christopher Nguyen [00:23:40] Absolutely. If you look at my writings and my tweets, even, you know, up to five or ten years ago, again, these are my non-techno-optimist credentials, I've warned about these things in some of my comments. Let me give you an analogy to economics. We call it the dismal science. Right? And for the most part, before behavioral economics, economists worked in steady states. Right. Kind of like in the limit. Right. In the long run, it should be like this. And then it was Keynes, I think, Keynes, who said, in the long run, we're all dead. So that's how I see these disruptive technologies, right? Yeah, of course, in the long run, we will adjust as a species. We will say, well, you know, just like today, when I see you on video, I don't jump and say, oh my God, who is this tiny guy? Right. But back when movies first came out, there's a sort of famous example of people running away as the train approaches in the moving picture. Right. So as a species, we will adapt. We will adjust. But each time there's a dynamic situation at a fixed point in time. And if technology evolves faster than sort of our biological rate of being able to adapt, what does that mean? Right. Are we always going to be falling behind? We're sort of one cycle behind, two cycles, three cycles behind. So there are things to fix, of course. Today I think it's still fairly easy to persuade a bunch of people. By the way, here's another thing.  

Steven Parton [00:26:04] Yeah. There seems to be a running trend on the show lately with people I've talked to, where one of the big points made is that we're judging AI so severely, to a standard that goes well beyond what we hold our human counterparts to, who are far more capable of destructive behavior right now. And it's very interesting that AI's progress seems to be surfacing our awareness of just how bad some of our human behavior really is. 

Christopher Nguyen [00:26:33] Right. Right. And I'm not saying that to excuse it. I'm saying that to be even more careful, that, as we say AI must be better, we are actually not worse. Right. 

Steven Parton [00:26:47] Well, and to that note, I believe you've talked in the past about the blind spots that Silicon Valley has, and some of the ethical considerations around these tools. Could you talk about that a little bit? 

Christopher Nguyen [00:27:01] Well, I like to talk not about first-order things, because people have already heard about those before, but maybe there are second-order things that are of concern. Right. One of them is the belief by a large school of thought, and I'm not saying good versus evil here, you know, it's kind of like the separation between intent and impact: you don't mean bad, but your impact can be very bad, and you need to recognize that. One of them is: I'm just here to build the best model I can, the rest is someone else's problem, right? Don't expect me to protect the world from this perfect model that I'm building. You know, you do that afterwards or before or whatever, right? You prepare the data and you decide what to do with the output. My job is to build the purest model possible. I think that's number one, and it's a fallacy. When you build a model, there are biases that you put into it, and I use the word bias not just in the colloquial sense of saying bias is bad. You know, I think most people will know that machine learning would not work without bias, the right bias; it's what we call the DC level in the literature, right? There has to be a bias for things to work. So that's number one. Number two is that you can look at things with a system view, or you can look at things with a component view. Right. It comes back to this sense of saying, you know, I'm only responsible for this part and the rest is someone else's problem. I think it's okay to say I'm responsible for this part, but I think the next sentence has to be to also be very concerned with how this part is built and how it's being used. Right. Sometimes I use the example of the part of our society that says, you know, guns don't kill people, people kill people, only the last party is responsible. I don't subscribe to that view. And I think that's perhaps a reminder to myself and to my buddies around Silicon Valley of that responsibility part, particularly because when I was a kid, you know, I mean, as an engineer kid, right, my technology, you know, I talked about flash, right, and I also worked on Unicode, it probably reached 10,000 people. But now, when you write some bits of code, some of us have the ability to reach a billion people. So, yeah, it's the same code, it's the same model, but you have vastly greater power now, and you should be more responsible. 

Steven Parton [00:29:45] Well, when we opened, we were talking a little bit about how maybe Silicon Valley and technology and the industry has changed. And one thing I know as a computer scientist myself, and from talking to people, is that there seems to be more movement away from a well-rounded, humanities-based education around this toward something that's more like boot camps and code camps. And that makes me wonder, to what you were talking about there, it's a bit concerning, perhaps, that the people who are creating code that reaches a billion people don't have a humanities education, aren't thinking about philosophy, aren't thinking about these deeper issues. Is that something you're seeing, maybe in the younger employees at these companies? Is it a concern maybe that you have about where we're going forward, with such an emphasis on STEM, without much awareness of the other side of things? 

Christopher Nguyen [00:30:36] Well, let me share part of my credentials here before I speak, which is that I'm involved in the creation and the launching of a new university in Vietnam called Fulbright University Vietnam. It's a part of the Fulbright program out of the Kennedy School and so on. And interestingly, the thing that I like about it is that it is not an engineering university. It is a liberal arts program, liberal arts with strong science and engineering foundations. And if you look at, you know, that institution alone, whether you like it or not, you know it's going to be true that over the next ten, 20 years, the graduates of that institution are going to be leaders of society, leaders of business and industry and so on. Forget about government. Right. But that's the kind of thing liberal arts plus, you know, strong foundations in STEM can produce. But I have to confess to something that has puzzled me: how do you articulate the value of this, other than, of course, we all feel good and we say, you know, you've got to be well-rounded and have the humanities and so on. But I listened to a podcast recently, and I think there's a professor at UCLA, I think her name is Katherine Hayles, and she gave a really good pitch that I totally buy and that I'm going to use. She's a Caltech graduate, I think, in chemistry or physics or something as an undergraduate, but then her Ph.D. is in literature, she's a professor of literature, and she explains the value of literature this way, and it really resonated with me. She says what literature gives you is the ability to step into the minds of other people and other cultures and experience things that otherwise would be unreachable to you, and thereby become a better person. Right. And I absolutely believe that. Indeed, I've lived in many countries, I was a refugee when I was a child, so I've dealt with pirates, believe it or not, and then prime ministers. And I think that diversity has concretely helped me. Right. And in machine learning, we also see the same thing. Right. Data variance is a good thing, so as not to overfit. So anyway, that's a long way of saying I was very excited to hear a really good way to sell, you know, a liberal arts education. 

Steven Parton [00:33:17] Well, I mean, to your point, the brain is potentially the best machine learning algorithm, the best inference engine that's out there. So, I mean, if having a more well-rounded education increases your model of the universe, I would expect that you would become a better engineer, because you would be able to use that model. 

Christopher Nguyen [00:33:37] Anybody who is multilingual knows that learning the next language makes you better at the languages you already know. 

Steven Parton [00:33:45] Yeah. 

Christopher Nguyen [00:33:46] We're seeing that in large language models as well. 

Steven Parton [00:33:49] Well, yeah, but there's an interesting irony here, though, as I'm thinking about this: some of our most advanced machine learning, some tools like GPT-3, and correct me if I'm wrong here, but I mean, this is all still very much black box AI, right? So regardless of what kind of background model of the world we have, we aren't really aware of or able to shape too much what's going on inside that black box. I mean, would you agree with that? 

Christopher Nguyen [00:34:22] Well, I agree with it in the large, in the sense of where we are. Right. In other words, there's a whole discipline emerging called alignment. Right. Which is, you know, you go back to 2001 and so on: how do you align your robot, or your sentient being that you've just created, with your intent? Right. How do you align your own child with your values, for example. Right. But at least with biological systems, we've sort of learned the art of teaching. Right. And then hopefully we get what we want. So I think alignment is so important, right? Because if we just build the thing and then let it run wild, then it will go in some random direction. The kind of, you know, ethical behavior we want will not just emerge. And, you know, granted that ethics itself is cultural, right? But just because something is cultural doesn't mean there's no direction to it. So the field of alignment is an emerging one. I don't know of a university class that teaches alignment, but the folks at OpenAI are sort of trying their best, right. And just to fact check: ChatGPT is not a more advanced model; the underlying model is still GPT, I think 3.5, or however it's internally numbered. But the reason that they're able to release it is that there's been a lot of alignment work done on the underlying model, so that, okay, now this thing can go out there. And what was that Microsoft thing that Microsoft released, Tay or something, that, you know, got racist after a few days? 

Steven Parton [00:36:05] A few days on Twitter? 

Christopher Nguyen [00:36:06] Right. So alignment is very important. And here's my optimistic take on it. Well, first of all, how do you think alignment work is being done? It's certainly not being done purely by feeding it, quote unquote, the right data and adjusting the weights. Right, because these models are becoming more capable of adjusting their weights from the conversations with you and me. The alignment work is being done by telling it in English what you want, and then the weights adjust, right? So the interesting thing is that these things are sort of emerging as, let's say, little children, very smart children, very knowledgeable children, but you can start to talk to them. Right. And so they're reaching us. They're reaching our ability, our normal communication protocol, language and so on. They are no longer restricted just to bits and bytes and images, and then hopefully something correct emerges. So I think, again, coming back, I think you're right in the large that we should be very concerned about this thing of what I call alignment. But at the same time, our ability, the tools available to us to do alignment, is becoming more accessible, more powerful as well. 

Steven Parton [00:37:33] Yeah, this, this might be an impossible question because it might be pretty akin to the hard problem of consciousness. But I was discussing this conversation actually over the weekend with my friend Amine, and we were getting into this idea or this question of whether the black box is something that can ever be solved if this is an issue that's just temporary and then eventually we'll be able to figure out how to look at those weights and figure out how to trace an output back to its source. Do you think that is something that is feasible or is this is that pretty unrealistic? 

Christopher Nguyen [00:38:06] Well, actually, some people may dismiss that as, you know, a naive question, or it could be a very profound question. I'm going to sort of unpack why I think that. I think people may dismiss it, but I think it's a profound question. Right. But I want people not to misunderstand: it isn't a question at the mechanical level. It seems like a naive question because we know exactly what's going on. Like, we know this better than we know our brains, right? Because we can actually look at what's happening. We can see the bits and bytes flowing. We actually created all these inference algorithms and we know exactly, well, the thing is clearly multiplying those numbers and so on. So in that sense, it's a complete white box. Right. But I think you're right, it's a black box in that we don't really know, right, we don't have a good handle on how this intelligence has emerged. Right. In other words, there are layers to this thing, and then somewhere we make a leap. So that leap is kind of the equivalent black box at some level: okay, I just know this thing works and I'm going to use it, right. I'm going to use it for my homework, I'm going to use it for my proposal and so on. So I think maybe the term black box could be problematic with some people who say, I really know what's going on here. So maybe I'd shift it, going back to the question of alignment, right: is there intent here? Right. How can you say there is no intent if there is emerging understanding, and does intent matter? Right. You know, I talked earlier about, you know, it doesn't matter what you intend, the impact is the same. Right. But as a society, as humans, we do dole out punishment for crime very differently depending on whether there is intent or not. Just this morning, there was a question in Congress about, you know, whether it was willful ignorance or not. Right? Yeah. So I think that's a challenging question. Right? Because, as you say correctly, I think it comes back to our sense of consciousness. Right. Consciousness, intent and so on, these are not very well defined words, by the way, even though we think we know them, right, because they are very intuitive to us. So we're not ourselves grounded in a good grammar and a good vocabulary for these words. If we can't introspect, then how can we inspect these machines and say they do or do not have this intent? So I think that's a fascinating area for research. And I wish, by the way, that the branches of psychology, neuroscience, and machine learning and computer science would work together better. You know, I think they will. But in the beginning, there are always these turf wars. 

Steven Parton [00:41:08] Absolutely. Do you think it would be possible, I guess, and I'm going to build on that question a little bit, with something like GPT, do you think it would be possible to learn where the answer comes from, in terms of the dataset that it most heavily pulls on? And I mention this specifically for something like DALL-E or Midjourney, where a lot of what people are concerned about is that it's basically stealing somebody's art, stealing their art style and creating a new piece of art, using their work as the prime data it was trained on. And there's a question of whether or not we could somehow maybe give some kind of attribution, or even, you know, financial support, to an artist whose dataset is maybe the largest contributor to the output from one of these systems. Does that seem like something that's possible, or that you think would be an interesting application? 

Christopher Nguyen [00:42:04] Yeah. I haven't thought about this deeply enough, and certainly I don't know enough about the field to say no. Here's my answer. I do have this mental model of creativity, or large classes of creativity. I'm going to say this, and I don't want to trivialize it, but I can certainly talk a lot about my own creativity: it has been the intersection of two things, which may be obvious, but that were previously somehow unrelated, and then you combine the two ideas together. By the way, that's why, you know, I inherently believe, you know, the pop psychology that creative people tend to, you know, are thought to have synesthesia as well, right, because, you know, their brains are wired in such a way that the different modes kind of intersect. I really think a big part, if not all, of creativity is about that: two unrelated things, maybe once in a rare blue moon three unrelated things, coming together. And so, you know, this idea of hyperspace: in many, many different dimensions, you have two straight lines and they don't meet, but then in some rare moment in hyperspace, they cross. And in that moment you have this burst of creativity. Oh, you know, Eureka, I thought of how to combine engineering with music or something like that. So if you subscribe to that idea, if you believe that idea, then how do we attribute, right, the act of combining something, an idea of yours, an idea of mine, or a field of yours or a field of mine? We have always acknowledged the source, but also given credit to the person or the thing or the organization that does the actual intersection. So I think, you know, how much stealing did actually take place, right? I'm not saying there isn't ever, right, there's straight-up plagiarism. But I guess what I'm saying is don't underestimate the ability, or the emergent creativity, at least by this definition, right, the ability of these machines to combine two heretofore unrelated concepts and come out with something that is, I'm going to call it... Yeah. 

Steven Parton [00:44:28] Do you think then, as somebody who has worked at the intersection of AI and business pretty heavily, that we're going to have to reconsider intellectual property law? 

Christopher Nguyen [00:44:38] Oh, absolutely. IP law, yes. In fact, I was on the record in, I want to say 2014, at a corporate meeting. It was actually Honeywell. A friend of mine, Rhonda Germany, who was the CTO, the CMO there, and there was an IP lawyer there. Right. And you have to cast yourself back to 2014, 2015, when none of this was obvious. Right? And we were talking about open source, right, back then, open source software. And I made a controversial statement and I said, you know, corporate assets, insofar as software and so on going forward, it's not going to be source code, it's going to be the models. Right. So the point was that shift alone, and, you know, there was no resonance, right, because there weren't a lot of those models then. Right. But, you know, it's pretty obvious now. Right. You know, companies say, well, you know, I'm not going to be the best in the world at these learning algorithms, right, or even at public datasets. But my intellectual property is two things: number one, datasets that are unique to me, right, and number two, my domain knowledge. That's how I'm going to innovate and where my edge is, not in building these shared models. So that's already happening. So what you say is already true. People have to rethink what IP is, right? The value of those lines of code, those learning algorithms, even if you own them, you know somebody is going to come up with a different algorithm to do learning and so on. It's not that valuable. What do you want to protect? Our knowledge, right? We go back to the word knowledge. Knowledge, whether it is embedded in these models, or knowledge that is embedded in my human experts and my human workers and so on. That's what's going to be unique going forward. Right. And so, absolutely, that's what people want to protect. Whether IP law changes or not, I think these industry intuitions, as I call them, will reveal themselves. 

Steven Parton [00:46:48] As we look forward a little bit more, you know, maybe to bring a bit of a closing to the conversation, as somebody who has navigated these waters and seen the disruption, you know, with your entrepreneurial background and your skills in AI, what kind of advice would you give somebody who's interested in entering this domain? And what are some of the, I guess, most dominant emerging areas where you think people should be paying attention? 

Christopher Nguyen [00:47:18] Well, I think the broadest brush that I can paint is: it's an exciting new economy. Right. And that is the contrast with the old economy. And this has always been true, right? If you want to be connected to the new economy, it's not that you have to be an engineer; you could be a product manager, you could be a podcaster, you could be a creator, whatever it is. But there is this emerging new economy, and it's a new economy with new business models, new tools, new audiences, new customers and so on, which contrasts with the old economy of how you make money, how you learn, and so on. This is certainly the advice that I give my own children: make sure, whatever you do, that you are connected to the new economy. And that means being aware of what's happening, maybe not following every up-to-the-minute trend, but being very willing to adopt new tools, sort of keeping your mind open, not being fearful. I don't think fear has ever led to, you know, greatness. Right. Maybe it'll protect you for a day or a month. But I think there's a reason why, you know, creative people, people who create things, tend to be optimists. Right? Because you have to believe in the possibilities. Right. So I would say, you know, some of the things that you and I touched on, right: a number of years ago, it would have been social networks, right, Internet, mobile and so on, and now ChatGPT. Right. There's talk about how it's going to destroy education, but those takes are still coming from a static assumption, right, that the world doesn't change. The world always changes. Right? People will change, people will adapt. And so, you know, like Wayne Gretzky, go to where the puck is going, or try to anticipate it, because things are changing faster and faster. That is qualitatively different. Don't assume that you're going to be, you know, taking on one job and staying with it for the rest of your career. Those days are gone. Right. Be very flexible and be willing to, you know, for some people that may be learning Python, right, picking up a data science course; for others, it may be taking on a marketing role that is talking about, you know, these foundational models, as opposed to something from the old economy. You know, over the next five years, I think there's going to be a whole bunch of experiments, companies that are trying to leverage, you know, these foundational models and do transformative things. There are already gimmicks, of course. Right. You know, and you can temporarily make some money with it. Right. Somebody writing essays and letters and so on. But I do think over the next five years, there's going to be a whole bunch of very large companies, institutions, that are founded on this idea of: I can take a foundational model and I can apply it, right, in, I think it was Sam Altman who called it this middle layer, right, to music, to art, you know, to medicine, to biology, to cold fusion, whatever. We want to stand on the shoulders of giants, and there is an emerging giant, right? That giant is the foundational models. So make sure you stay on top of that. 

Steven Parton [00:50:52] Yeah, that's great advice, Christopher. Well, I want to thank you for your time. And before we go, I want to give you a chance to to leave us with any closing thoughts, point people in any direction you'd like to talk about anything at all. 

Christopher Nguyen [00:51:05] Well, you know, if you found these thoughts interesting and want to reach me, or talk about knowledge-first AI as it applies to the industrial economy, I'm very excited about it. You know, it sounds stodgy, but I like to call it the physical economy, because when you say industrial, people say, oh, that must be like large machines and so on. But now, if you eat a fish, you care. You care about the physical economy. 
