At a recent roundtable, we had Gus Tai, Investor, Board Member, and Retired General Partner at Trinity Ventures, discuss the implications of human-centric AI, white spaces, comparables, and more.
Sramana Mitra: Today, we’re going to start our conversation with Gus Tai, a very close and old friend of mine. Gus has been a very successful VC in Silicon Valley and also has a long history as an angel investor. Even though those two perspectives feed into each other, there are nuances in the two hats that Gus has been wearing.
We’ll discuss the topic that has been on the table for the last few months – the AI investment thesis. AI is dominating all popular conversations, all investor conversations, entrepreneur conversations, everything right now, and it’s moving at absolute breakneck speed. We have explored the topic from many different angles, but every week, I feel like we’ve moved beyond the last discussion and there are more interesting new things to talk about. So Gus, welcome. I’m really thrilled and eager to discuss what’s going on in your brain.
Gus Tai: Thank you, Sramana. I’m looking forward to the discussion. It will be a rich and lively one.
Sramana Mitra: So Gus, what is your general assessment of the AI investment opportunity and the AI entrepreneurship opportunity? They’re obviously highly related, but they’re also not entirely the same. So let’s start with a bit of a general synthesis of your takeaways, and then we’ll dig deeper on many of those insights.
Gus Tai: Definitely. I would start by highlighting that AI refers to a set of techniques that provide intelligence or enhance decision-making when faced with an infinite number of choices. Different techniques have existed for a long time. I recall that your work back in your graduate days involved machine learning and various types of algorithms. There’s a lot of excitement now because of the progress in generative AI, particularly with large language models, but also in multimodal forms of intelligence. This opens up a new set of potential capabilities. So, I would say this brings us to an ongoing opportunity for new businesses to use advanced algorithms in traditional ways, alongside a surprising number of new applications that can be built on generative AI.
I think there are an infinite number of opportunities. In terms of what those opportunities mean for an investor versus an entrepreneur, I think the uncovered areas still have great potential as investments. Some of the established companies may not be great new investments, but they can still be great companies.
Sramana Mitra: So Gus, in our last 10-12 discussions on this topic, a number of points have come up, and I’m going to quickly synthesize them and then get your perspective on taking that discussion further.
I think there is consensus that the platform layer, training these very large language models, is the domain of OpenAI, Anthropic, Cohere, and the others pursuing these extraordinarily expensive ventures that are very heavily funded by the big tech giants. To some extent right now, the sovereign wealth funds are also kicking into that party. But by and large, that is not where most VCs are looking right now. They’re looking at that layer more as a platform as a service, with vertical AI being built on top of it, along with the picks-and-shovels opportunities around all of that. People running smaller funds, especially, can’t really play that big language model game. They’re playing much more in the vertical AI game.
Another perspective that is coming in is human-centric AI. Some investors are thinking that it’s too complicated to train people inside the Global 2000, or enterprises in general, to learn how to use these tools and technologies, even if it’s vertical AI. They’re considering more of a do-it-for-me model, in contrast with the do-it-yourself model: almost like business process outsourcing, but high-powered, AI-enabled business process outsourcing that delivers very high levels of value aided by AI while taking on the whole function as an outsourced one. This resonates well with me. This human-centric AI concept is something I’m thinking quite a bit about, so I would like to put it on the discussion roster for today and hear what you have to say on that.
We’ve also heard quite a bit on the hallucination topic. Obviously, a lot of these companies are struggling with the hallucination issue, and some of them are coming to the conclusion: why bother with these very large language models? If you’re doing vertical AI, why not use a small language model that does not hallucinate willy-nilly, and build applications on well-trained but constrained language models? As we know, AI does very well with constrained models: constrained vocabularies, data structures, and so on. So that also resonates with me.
All of this is bringing another set of questions to my mind. I just started reading Yuval Noah Harari’s new book, Nexus. Of course, he deals with things at a very large, species-level scale. We have all these problems that are still very much open problems. Poverty remains an open problem, large-scale healthcare remains an open problem, and large-scale education remains an open problem. To what extent can AI start offering scalable solutions in those domains? How can capitalist models, the venture capital model and the entrepreneurship model, aid that evolution? So that sets the agenda. You can pick whichever of these you want to start with.
Gus Tai: I’ll start with the first topic you mentioned, human-centric AI. For people who are interested in understanding the trends in human-centric AI, I would recommend a book called ‘A Brief History of Intelligence’ by Max Bennett, who was a professor and is now the CEO of an AI startup. In it, he talks about the evolution of human intelligence and how that might inform our understanding of artificial intelligence (AI). AI is meant to be a supplement to or a replacement for human intelligence, and generative AI is intrinsically a way of predicting, in a procedural manner, the semantic communication and messaging humans use to convey interests, intentions, and observations. It’s an imprecise method, but that’s the fifth breakthrough in intelligence that Bennett talks about.
I call that out because vertical AI works with constrained models. These constrained models can either filter out miscommunication among people or allow us to investigate the real underlying attributes of the system being modeled. AI is actually more accurate and effective when you can model the underlying physics of the arena.
Take chess as an example, where you have Stockfish outperforming AlphaZero. But artificial general intelligence (AGI) is meant to be a supplement to how a human being approaches an activity. An LLM can predict the procedural instruction we would give another person. Hallucinations arise because the model is not grounded in reality; they’re a feature of LLMs. I think the reason LLMs are still very appropriate for human-centric AI is that human beings have our own hallucinations when we perform. We have our own erosion and change in performance: if someone comes in sleepy, is on drugs, or has a hangover, they’ll behave in a certain way, and we have systems robust enough to deal with that variation in performance. Likewise, I think that in human-centric AI solutions, hallucinations don’t cost as much. The errors aren’t as expensive, and we can build procedural suggestions that help human beings.
Then I would just add a final note on this evolution of planning, or Q*, or whatever they’re calling it in this next generation. Those advances will dramatically improve the intelligence of a general intelligence system, because it can lay out the steps in chain-of-thought reasoning, look back, and say, “Maybe there’s a hallucination here.”
So, I do think hallucinations will diminish in impact. I do think that AGI will be able to replicate very complicated procedures, anything you can describe how to do through instruction. AGI will be able to do a good job, and that will be a supplement to human design. So, I just see enormous opportunities in human-centric AI.
This segment is part 1 in the series : 1Mby1M Virtual Accelerator AI Investor Forum: With Investor Gus Tai