Sramana Mitra: What does the ecosystem look like in terms of competitors? Does Portworx have competitors?
Murli Thirumale: Yes, we do. I’ll give you a sense of the competitive environment through a few use cases, so that we can talk about why our competitors are not able to do what we do. A good place to start is how widely we are deployed.
Portworx, at the time of the acquisition, had about 175 customers. Most of them come from the Global 2000 – the high-end set that has the ability to invest in DevOps and new technologies.
I am going to give a couple of examples. One of our best customers is T-Mobile. They use us across several projects. The biggest deployment is customer onboarding for anyone with an Apple device.
I’m a T-Mobile customer, so when I get a new Apple device, whether it’s from the mobile store or a physical store, all of that customer data is onboarded on the back end using a containerized system. That includes Pivotal, which is now part of VMware, as the container management system. It includes Portworx, with Pure Storage as the underlying storage.
They have been using us now for two or three years. For example, there is a big rush coming with the iPhone 12 launch, at Christmas, and every Black Friday. Those are the times when we are on standby. We get larger orders from them and get ready for the onslaught of new devices that they are going to deploy.
They essentially have the ability to spin up a large number of containerized applications to deal with iPhone onboarding, which, as you can imagine, is very uneven through the year. That is one example of a larger deployment.
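As a rough sketch of what that kind of burst scaling can look like on Kubernetes (the deployment name, namespace, and replica counts below are hypothetical, not T-Mobile’s actual setup):

```python
# Hypothetical sketch: pre-scale a containerized onboarding service ahead of
# a launch event, then scale it back down afterwards. Names and replica
# counts are illustrative, not an actual customer configuration.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
apps = client.AppsV1Api()

def set_replicas(deployment: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of a Deployment's scale subresource."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Before the launch window: spin up many more onboarding pods.
set_replicas("device-onboarding", "onboarding", replicas=200)

# After the rush subsides: return to the steady-state footprint.
set_replicas("device-onboarding", "onboarding", replicas=20)
```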
The second example would be Comcast, another large service provider. The use case that has been in production the longest is one where they take set-top box data and run searches on the back end. They mine their set-top box data using Elasticsearch.
Vast amounts of data are streamed all the time – usage data, customer data, and usage patterns – into an analytics package that uses Elasticsearch. All of that runs on server-based storage.
They have a large farm of compute servers with lots of storage in them, most of it SSDs. We allow them to bring that data in and stream it from the edge all the way to the core, and then help them deploy Elasticsearch to mine that data.
The third example is a more recent one. It’s something that exploded during COVID. It’s Kroger. They are using us on Google Cloud’s GKE along with databases like MongoDB. They have just recently rolled out a smart search running on Portworx. It’s all containerized. The first set of stores has been Schmidt’s Food stores. For example, when you use the smart search algorithms to look for something like bleach, it can give you results close to your store and tell you whether it’s in stock or not.
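For illustration, requesting a Portworx-backed volume for a containerized database such as MongoDB on a Kubernetes cluster like GKE might look roughly like the sketch below; the StorageClass parameters, names, and sizes are assumptions, not Kroger’s actual configuration.

```python
# Hypothetical sketch: provision a Portworx-backed persistent volume for a
# containerized database (e.g. MongoDB) on a Kubernetes cluster.
# StorageClass parameters, names, and sizes are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()
core = client.CoreV1Api()

# A StorageClass that delegates provisioning and replication to Portworx.
# "repl" asks Portworx to keep multiple replicas of each volume.
sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="px-db"),
    provisioner="kubernetes.io/portworx-volume",  # in-tree Portworx provisioner
    parameters={"repl": "3"},
)
storage.create_storage_class(body=sc)

# A PersistentVolumeClaim the MongoDB pod would mount for its data directory.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mongo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-db",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```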
Marks & Spencer did something similar with Deliveroo, which is a DoorDash-type service in the UK. A lot of these are containerized applications running on older infrastructure with Portworx on top. Now let me get to your question.
What would they do if they didn’t have us? What are their alternatives? The biggest alternative that people have tried has been to take whatever old storage they have in-house and try to extend it. Very often, the storage vendors provide what we call connectors to Kubernetes or containers.
These connectors are exactly what they sound like. They try to connect a physical volume in that storage to those containers. It’s no surprise that this runs out of steam after a certain amount of deployment. They may be good for an initial POC of about ten nodes or so.
As you can imagine, you try to connect physical volumes through these connectors and orchestrate the connectors, and it’s impossible to do. It’s slow. There are connection and disconnection problems. It’s just not a modern, distributed, highly available system.
It’s the concept of taking an underlying physical storage system and trying to make it work across a very large number of nodes, which is not what it was designed for. The connector model is reaching its end. There are still people using containers with connectors, but at around 20 nodes or so they need to move to something like us.
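For contrast, a connector in this context is typically a vendor storage driver surfaced to Kubernetes through a StorageClass, so that each container volume maps back to a volume on the physical array. A minimal sketch, with a made-up driver name and parameters:

```python
# Hypothetical sketch of the "connector" model: a legacy array's driver
# exposed through a StorageClass, so every claim maps to a physical volume
# that must be created, attached, and detached on the array itself.
# The driver name and parameters below are invented for illustration.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

legacy_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="legacy-array"),
    provisioner="csi.example-array.vendor.com",  # made-up vendor CSI driver
    parameters={"pool": "pool01", "protocol": "iscsi"},
)
storage.create_storage_class(body=legacy_sc)
# Each PersistentVolumeClaim against this class triggers array-side volume
# creation plus host attach/detach operations - the per-volume choreography
# that tends to break down past a small number of nodes.
```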
The other thing that we do see is distributed solutions. The most common one that we see is an old solution that was bought by Red Hat. It was born in the OpenStack age. Many of your listeners and readers who are familiar with OpenStack know that it was born for a different use in a different era. It has been pretzel-twisted to try to make it work for containers.
Like any technology that was designed for something else, it has a lot of limitations in performance and scalability. There are a few other startups that have tried to build what we have. They are at different levels of evolution. I would say that, in general, most of them are able to sustain a POC, but our strength has always been getting people into production as fast as possible.
That’s really the goal. When you are looking at agility, it’s getting to production as fast as possible at scale. That’s how we differentiate ourselves.
This segment is part 4 in the series : Thought Leaders in Cloud Computing: Portworx CEO Murli Mohan Thirumale