By Sramana Mitra and guest author Shaloo Shalini
SM: What you delivered to the Automotive Aftermarket Industry Association was a metadata structure of sorts?
JP: Yes, exactly.
SS: In terms of business model, we built ours on a reverse market adoption model. Once we realized that legacy systems had muddled this process, and that making it work under the classic enterprise application integration (EAI) approach was problematic, we decided to get the super spec done. Having realized this early on, we went to a data-driven model, and for a data-driven model to work we have to canonicalize the data. That was the point of Jason re-creating the super spec. An important step we took is that we donated the super spec to the industry, which then created a board of directors drawn from across the industry, including buyers, retailers, sellers, system providers, and ERP providers. Those people sit at the table and drive super spec adoption, and we maintain the super spec for that board. This has allowed us to push the approach forward because we have such a large ecosystem: all of these people are evangelizing Jason's super spec and the canonicalization of the data and metadata. We already have market adoption for the data-driven model. We maintain the business rules, but the architecture, the super specification, we have donated to the industry.
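To make the canonicalization idea concrete, here is a minimal sketch of what a canonical part record might look like. The real super spec is the industry-maintained standard; every field name below is an illustrative assumption, not the actual specification.

```python
from dataclasses import dataclass

# A minimal sketch of a canonical "super spec" part record. The real
# super spec is maintained by the industry board; these fields are
# illustrative assumptions, not the actual standard.
@dataclass
class CanonicalPart:
    part_number: str        # supplier-assigned part number
    brand: str              # brand or manufacturer label
    line_code: str          # product line identifier
    description: str        # human-readable description
    supplier_id: str        # which supplier owns this record
    quantity_on_hand: int   # inventory exposed to the network

# Every inbound file, whatever its native format, gets translated into
# this one shape, so downstream applications only see canonical records.
```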
SM: Who owns the data itself? Are those 10 million parts floating through this ecosystem sitting in a database? Where is the data categorized according to the right taxonomy, and where does it sit?
JP: The people involved in the process own that data. The suppliers and the manufacturers own it because they are the ones producing those parts and defining part numbers in their systems; they sell the parts and capture that information. However, other entities own portions of that data as well. There are catalogue providers, who create online catalogues on behalf of these suppliers. And there is a new application we have in the cloud called Virtual Inventory Cloud, which takes in vendor inventory information. When we get a file, we run it through a transformational process and load it into a repository in the cloud, where one of our applications makes it available for availability lookups.
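As a rough illustration of that transformational process, the sketch below reads a supplier file in its native column layout, maps it onto canonical field names, and hands the records to a loader. The column names, the mapping, and the upload_to_repository function are all hypothetical, not GCommerce's actual pipeline.

```python
import csv
from typing import Iterator

# Hypothetical mapping from one supplier's native column names to
# canonical field names; each supplier would have its own map.
COLUMN_MAP = {"PartNo": "part_number", "Qty": "quantity_on_hand", "Desc": "description"}

def transform_file(path: str, supplier_id: str) -> Iterator[dict]:
    """Yield canonical records from one supplier's native-format CSV."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            record = {canonical: row[native] for native, canonical in COLUMN_MAP.items()}
            record["supplier_id"] = supplier_id  # tag ownership of the data
            yield record

def upload_to_repository(records: Iterator[dict]) -> None:
    """Placeholder for the load into the cloud repository."""
    for record in records:
        print("loading", record)  # a real system would batch-write these

if __name__ == "__main__":
    upload_to_repository(transform_file("acme_inventory.csv", supplier_id="ACME"))
```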
Some of these suppliers do not send files. They manage, hold, and keep close control over their inventory data, and they are willing to expose that information to us only through secure methods. They use a Web services method that follows a standard called Internet Parts Ordering (IPO). That is a Web services–based call, wrapped in security, that goes directly to the supplier's system asking for a particular part. The supplier's system kicks back a Web services–based response saying, yes, it is available, and here are the rules on how to order it, and so forth. Then you can turn around and kick off an order over those same Web services to the supplier; it hits their business system, and they ship the part, trade invoices, and all of that.
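Here is a sketch of what such a secured request/response pair could look like, assuming HTTPS with mutual TLS as the security wrapper and the third-party requests package. The endpoint URL, message fields, and function names are placeholders; the real IPO standard defines its own message schema.

```python
import requests  # third-party package, assumed installed

SUPPLIER_ENDPOINT = "https://supplier.example.com/ipo"  # placeholder URL
CLIENT_CERT = ("client.pem", "client.key")  # mutual TLS: the "wrapped in security" part

def check_availability(part_number: str) -> dict:
    """Ask the supplier's system whether a particular part is available."""
    response = requests.post(
        SUPPLIER_ENDPOINT,
        json={"messageType": "AvailabilityRequest", "partNumber": part_number},
        cert=CLIENT_CERT,
        timeout=15,  # the streamlined flow targets ~15-second turnaround
    )
    response.raise_for_status()
    return response.json()  # e.g. {"available": True, "orderingRules": {...}}

def place_order(part_number: str, quantity: int) -> dict:
    """Kick off an order against the supplier's business system."""
    response = requests.post(
        SUPPLIER_ENDPOINT,
        json={"messageType": "OrderRequest",
              "partNumber": part_number,
              "quantity": quantity},
        cert=CLIENT_CERT,
        timeout=15,
    )
    response.raise_for_status()
    return response.json()

# Example usage: check availability, then order if the supplier says yes.
# info = check_availability("BRK-1234")
# if info.get("available"):
#     place_order("BRK-1234", quantity=2)
```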
This brings us to the cloud story: what is the essence of why the cloud is important? The essence is that we have used the cloud to address a need within the automotive aftermarket that suits the cloud paradigm. What we have created as the Virtual Inventory Cloud solves an order supply chain problem: it gives you visibility into these 10 million parts wherever they are, regardless of your technical capabilities. We have created a very low-cost model for gaining access to something you had no visibility into before. And not just visibility, but visibility very quickly. Previously you might have identified the part, then called somebody and gone through five or ten minutes of process trying to get something done in the traditional model. We have streamlined that process using the cloud paradigm and brought it down to 15 seconds. What you end up with as a user is a much faster special order process: if a special order traditionally took 15 minutes and a company could do only 20 of those in a day, it can now do 120 or more in a day.
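A quick back-of-the-envelope check of those figures: the 15-minute, 20-per-day, 15-second, and 120-per-day numbers come from the interview, while the roughly two minutes of residual handling per order is our assumption to reconcile them.

```python
# Reconciling the throughput claim. Interview figures: 15 min per order and
# 20 orders/day before; 15-second lookups after. The ~2 minutes of remaining
# per-order handling is our assumption, not an interview figure.
daily_budget_min = 20 * 15               # 300 minutes/day spent on special orders today
per_order_min = 15 / 60 + 2              # 15-second lookup plus ~2 min of handling
print(daily_budget_min / per_order_min)  # ≈ 133 orders/day, consistent with "120 or more"
```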
SM: In the two modes you have described for building this repository, one with suppliers loading their specifications onto your system and the other a Web services–based call to their systems, how is the 10 million part inventory split? How much of it sits in the repository in each mode? How much do you still have to cover?
JP: I would say it is going to end up as an 80/20 split in the weeks to come. We have about 3 million parts on our systems today, so we still have a lot more to onboard.
SS: Well, you see what happens, Sramana: it is about 10 million parts, and we fully load the 3 million active ones. Having those 3 million on our system will probably cover about 85% of the non-special-order inquiries. Of the overall 10 million, 8 million will end up in the system, and the remaining 2 million will be reached through some type of remote call, like the Web services call.
SM: OK.
SS: The 80/20 rule would be the cutoff; the Pareto principle rules in this case.
SM: Can you tell me more about the architecture of your cloud solution?
SS: OK, this is where it gets cool. This is the fun stuff, and Jason would like to answer it.
JP: In terms of architecture, if you look at the ground level, there are two major parts. We have an on-premise network, our traditional network, which we have been using to do business for a while. Running throughout it is the canonical data model in which our super spec is implemented: every data file we take in is mapped and translated into the super spec format. Then we have an off-premise setup as well, built on the cloud model.
These two work together to transform and deliver the inventory files quickly, securely, and in a streamlined manner between our traditional electronic data interchange (EDI)–based network and the cloud. This matters because we receive masses of inventory files in small increments, so we need an environment that processes them quickly, efficiently, and securely and then passes them on to the cloud, where the architecture does what is best for the data: send it to the repository and make it available to the world. That is the first part of the architecture.
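The sketch below illustrates that incremental handoff in miniature: file increments arrive on a queue, a worker translates each one, and the result is pushed to the cloud repository. The queue, the trivial transform, and push_to_cloud are all stand-ins for illustration, not GCommerce's implementation.

```python
import queue
import threading

# Increments of inventory data arrive continuously on this queue.
incoming: "queue.Queue[str]" = queue.Queue()

def transform(increment: str) -> str:
    return increment.upper()  # stand-in for the EDI-to-super-spec translation

def push_to_cloud(record: str) -> None:
    print("pushed to cloud repository:", record)  # stand-in for the secure upload

def worker() -> None:
    """Drain the queue: translate each increment and pass it to the cloud."""
    while True:
        increment = incoming.get()
        if increment is None:  # sentinel: no more increments
            break
        push_to_cloud(transform(increment))
        incoming.task_done()

t = threading.Thread(target=worker)
t.start()
for item in ["file-001", "file-002", "file-003"]:  # small, frequent increments
    incoming.put(item)
incoming.put(None)
t.join()
```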
The second part of how these two play together is the single sign-on model for authentication and authorization. We have two entry points into the cloud; the first is a Web portal we call the Virtual Inventory Cloud. Anyone can go to that portal and log in. When users enter their credentials, the request goes to our on-premise system for authentication, and the system establishes a tight, secure trust between our on-premise and cloud systems. It then goes to the cloud, retrieves what that user is authorized to do, and feeds that back. It builds their streams and presents their part data, and vendors can gather from that all the information the rules allow: industry rules governing what they can see, how it is displayed, and to whom. Security and all of those parameters are built into the cloud.
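In miniature, that login flow looks something like the sketch below: authenticate against the on-premise system, mint a short-lived trust token, then ask the cloud side for the user's authorizations. The credential store, token scheme, and permission names are all illustrative assumptions.

```python
import secrets
from typing import Optional

ON_PREM_USERS = {"jane": "s3cret"}  # hypothetical on-premise credential store
CLOUD_AUTHORIZATIONS = {"jane": ["view_parts", "place_orders"]}  # hypothetical cloud-side rules

def authenticate_on_prem(user: str, password: str) -> Optional[str]:
    """Step 1: verify credentials against the on-premise system."""
    if ON_PREM_USERS.get(user) == password:
        return secrets.token_hex(16)  # short-lived trust token for the cloud hop
    return None

def authorize_in_cloud(user: str, token: str) -> list:
    """Step 2: with a valid trust token, fetch what the user may do."""
    assert token, "a trust token from on-premise authentication is required"
    return CLOUD_AUTHORIZATIONS.get(user, [])

token = authenticate_on_prem("jane", "s3cret")
if token:
    print(authorize_in_cloud("jane", token))  # ['view_parts', 'place_orders']
```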
This segment is part 3 in the series: Thought Leaders In Cloud Computing: Steven Smith, President And CEO, Gcommerce Inc.