Sramana Mitra: Okay. Let me push you on that a little bit. This is great, by the way. This is exactly what I want our audience to go through. So, let’s take the example of AI enabled drug discovery. It’s one of the most promising areas in my view. It’s what excites me the most, actually.
There is domain knowledge that AI is going to capture by training the models, but you also need scientists who understand those domains to be able to do something with that AI.
Is that not a defensible competitive advantage?
David Hornik: I guess we’ll see. It’s interesting you bring up this particular scenario. I taught a class at Stanford Law School this quarter, and this was exactly what we talked about. It was a class about licensing. I talked about AI-generated drug candidates going to a company that uses CRISPR to create drugs. Those drugs then get tested for efficacy by a big corporation that’s good at drug testing.
So the question is, which of these companies is defensible in the long run? Is it the company that is generating the candidate drugs?
I do think that domain expertise will allow for a better purposed vertical solution. That’s what we’re talking about here. Drug discovery is a vertical solution of LLMs, and I think that’s true. You will need some domain expertise to understand what the right inputs are and what the right outputs are.
My suspicion is that’s not unique knowledge. I think there is a small number of people who have that knowledge, and therefore the more of that knowledge you have, the better positioned you are.
I can be disabused of this over time, but my belief at the moment is that one of two things will differentiate you. One version is where you are able to create your own LLM that is uniquely suited to solving a particular problem. There are some folks working on LLMs that are self-validating using the laws of math and physics, as opposed to a standard LLM, which uses estimation. Those will be better at some things, and if you have a unique set of formulations, you could create a better LLM for a particular domain. That will be differentiated and protectable.
So that’s one area. It is certainly possible that one could do that in the drug discovery space. If you have a full understanding of drug candidates that are self-validating, if you fully understand the domain, you can create a differentiated LLM. So that’s one version, where the tech itself is differentiated and defensible.
The other version is that you have unique information. Ultimately, these LLMs are only as good as the training data you give them. If you are somehow uniquely situated to have training data that someone else doesn’t have, then you could actually have proprietary outputs. It is conceivable that a major university could keep its research to itself for the purpose of training LLMs, which would then be better able to create a defensible set of drug candidates because the underlying information is better.
Is that good for the planet? That’s an interesting question, right? We have gotten to great innovation over time because of information sharing. So it will be interesting to see whether we’ve created a world in which global knowledge is so easily repurposed that folks will hang onto it for themselves more.
Sramana Mitra: From a very simplistic perspective, I look at the universe and what’s happening in the technology world. I don’t have the knowledge to do drug discovery. I’m generally an extremely intelligent person with plenty of intellectual horsepower, but it’s not sufficient to do drug discovery. I can do a lot of other things, but I cannot do this.
David Hornik: Here’s what I would say to you. I totally get that. I often tell this story, because I think it speaks to this question.
I had four small children, tightly grouped. So, I spent a lot of time in pediatricians’ offices and ERs with fevers and the like. By the time I got to kid number four, when I found myself in an emergency room, I answered all the questions the doctor was going to ask me in advance, using the words he was going to use, not because I was a doctor, but because I had four small children’s worth of being asked these questions and understanding them. I would say, “Here’s an elevated heart rate, and this and that.” Whenever I finished, the doctor in the ER would say, “Are you admitted at Stanford, or are you admitted at another hospital?”
It was because I knew his vocabulary, not because I had some distinct piece of information, but over time, I had acquired all the necessary vocabulary to speak his language in advance of his making the medical diagnosis.
A lot of this stuff is not really about having some deep and unique knowledge. It is about understanding the domain. I do think that if what you chose to do was say, “Okay, I want to understand the drug discovery pipeline, et cetera, and I’m going to leave the science to the LLM,” you might be able to do that in a relatively short period of time.
We probably have a lot of entrepreneurs listening to this who are thinking, “I don’t know how to manufacture things, but I’m going to become a manufacturing expert,” and then doing exactly that.
I was talking to a guy yesterday who’s working on a set of bots for customer service. He has friends who run manufacturing facilities, and they asked him, “Could you put your mind to creating a set of solutions for us that would take a bunch of labor out of the manufacturing process?” He’s not a manufacturing expert, but he went ahead anyway.
Sramana Mitra: I also feel like I can do that diagnostic-related stuff. I do it all the time within our family when we are struggling with something. Often, one doctor is siloed and doesn’t know another doctor’s situation. I plug it into ChatGPT, ChatGPT gives me something, and I diagnose. So I agree with you that there are certain things where generally intelligent people using AI can go very far.
David Hornik: I suspect that’s drug discovery.
Sramana Mitra: You suspect that’s drug discovery too. Okay, I still have to think about that. I don’t feel so comfortable that I can do drug discovery.
David Hornik: Give it time.
This segment is part 3 in the series : 1Mby1M AI Investor Forum: David Hornik, Lobby Capital