Sramana Mitra: Okay, when did the Automate product come out?
Abhinav Girdhar: That was prior to Design. We launched it during the thick of Covid. Now, we’ve got three of these products doing pretty well.
Sramana Mitra: The Automate product is also at a million-dollar run rate?
Abhinav Girdhar: Everything is north of that. The core product is north of even $10M. All products are doing pretty well now.
Sramana Mitra: So you said you have one product that is competing with Canva. What positioning gets you the customers who are in the Canva space?
Abhinav Girdhar: We are competing with Canva for the keywords essentially, but we’re not actually competing with Canva. Our focus is more on the AI side of things. We don’t want people to come and design it on a canvas and change things. Our primary focus is – you come, you write in English, and we generate the image for you.
You explain what is on your mind. Suppose you want to create a reel, or a banner, or a birthday card.
You explain it and give the details. You can say, “I want to create a birthday card for my son’s birthday, and this is the venue.” That’s it. We generate that for you.
We are more into generative AI rather than taking a template and then changing things. We don’t want people to spend hours customizing stuff. Even for editing, what we are planning is that once the first image or the first video is created, users describe what changes they want, and we make those changes for them. So English would be the design language, not design skills.
Similarly with the app, like all the other companies we are working heavily on AI. In the new model that we are working on, you describe an app or a website you want in text. For example, you say, “I want a website for my bakery business. I want to sell donuts.” It creates a website for you with a full-fledged store in which you can sell donuts – all using AI.
Sramana Mitra: You talked about workflow automation. Do you have the workflow automated in the context of design? You mentioned reels a few times, so let’s say I create a reel. Can I just tell your system to go publish the reel on Facebook or LinkedIn or wherever?
Abhinav Girdhar: We haven’t done that integration on Design yet. Eventually, that’s the plan. The Automate product would definitely be able to do that. But if you want to do that today, we have APIs available for Design and APIs available for Facebook, so you can build that integration right now; it’s just not supported in the Design platform itself.
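[Editor's note: As a rough illustration of the integration described above, the sketch below chains a hypothetical text-to-reel design endpoint with Facebook's Graph API video-upload pattern. The design-API endpoint and field names are assumptions, not Appy Pie's actual API; the Graph API URL follows Meta's documented page-video upload convention.]

```python
# Hedged sketch: wire a (hypothetical) design-generation API to the
# Facebook Graph API so a generated reel gets published automatically.
# No real network calls are made here; we only build the requests.

def build_design_request(prompt: str) -> dict:
    """Payload for a hypothetical text-to-reel design endpoint.

    The field names ("prompt", "format", "duration_seconds") are
    illustrative assumptions, not a documented Appy Pie schema.
    """
    return {"prompt": prompt, "format": "reel", "duration_seconds": 30}

def build_publish_request(page_id: str, video_url: str, access_token: str) -> tuple[str, dict]:
    """URL and payload for publishing a hosted video to a Facebook Page.

    Follows Meta's documented pattern of POSTing a file_url to the
    Graph API video edge for a Page.
    """
    url = f"https://graph-video.facebook.com/v19.0/{page_id}/videos"
    payload = {"file_url": video_url, "access_token": access_token}
    return url, payload

if __name__ == "__main__":
    # Step 1: ask the design API for a reel from plain English.
    design_req = build_design_request("A 15-second reel announcing a bakery sale")
    # Step 2: once the design API returns a hosted video URL, publish it.
    url, payload = build_publish_request("1234567890", "https://example.com/reel.mp4", "PAGE_TOKEN")
    print(url)
```

In practice the glue code would also poll the design API until rendering finishes and handle Graph API error responses; this sketch only shows the shape of the two calls.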
Sramana Mitra: I would do that, because, you know, I’m listening to you, and if I put on my product marketing hat, I’d finish that product. It’s a half product versus a whole product, right? You have the design piece. If I create a reel, well, tell me where I’m going to publish it and help me manage the marketing of that reel. Otherwise, what is the point?
Abhinav Girdhar: Correct. Eventually, the end goal is definitely to get them to publish not only on Insta but also on Google Shorts and on TikTok as well.
Sramana Mitra: Everywhere – YouTube, LinkedIn. It depends on the audience, but wherever the audience makes sense, the designer or the small business customer should be able to put it there.
Abhinav Girdhar: The product is still in closed beta. The biggest problem is that most diffusion models are not trained on textual content. They’re trained on millions of images, and the text within those images isn’t read cleanly by an optical character reader (OCR). The diffusion models don’t understand how characters work, essentially, because there are so many images in so many different formats.
When you go to OpenAI’s DALL·E or to Gemini and tell them to generate an image, they would never get the text correct because they’re not trained for that. So now, we are having to train models to get the text rendered correctly in these images. That’s under alpha testing.
In that particular model, the biggest problem in videos is that when you create a longer thirty-second video, consistency in characters is very hard to achieve. For example, say you have two characters. You create a video for the first three seconds and then create another video. When you merge them together, consistency between the two clips is nearly impossible to achieve. We haven’t yet hit the nail on the head. OpenAI is also creating a model called Sora, which would go live soon.
We’ve had a major breakthrough in the text space. We might be coming out with a model that will be able to fix one of the largest problems in diffusion models: generating text correctly without any typographical errors. No model allows you to do that right now. So if that happens, it would be a breakthrough of sorts.
The same goes for video consistency. Once we are happy with the end product, then we’ll start working on integrating with the third-party platforms. It’s still at a nascent stage.
This segment is part 6 in the series : Bootstrapping a Generative AI Venture Using Services: Abhinav Girdhar, Founder CEO of Appy Pie