
IBM chief data scientist makes the case for building AI factories

IBM distinguished engineer and chief data scientist John Thomas contends that for organizations to really embrace AI, they need to adopt a factory model that automates as much of the model building process as possible.

Just as a traditional factory creates physical products reliably, at scale and at speed, an AI factory would enable businesses to quickly build and scale trustworthy AI models.

VentureBeat caught up with Thomas to gain a better understanding of how an AI factory would actually work.

VentureBeat: What are the biggest challenges organizations encounter today with AI?

John Thomas: There are a few recurring themes we keep running into. Pretty much every large customer we work with has a data science team, and they already have some kind of data science project going on. But many of those projects are just experiments. They don’t make it into production, and even if they do, it takes forever to get things from concept into production.

When we start digging into why that’s happening, there are a number of different causes. Sometimes it is a misalignment between what the business expects and what the data science team is building. Sometimes it’s about the model: it’s great in development, but organizations are having a hard time getting it through the model validation and risk management processes and getting it approved for deployment in production. Sometimes it’s about what happens after it goes into production. All of these are challenges outside the actual model-building piece itself.

Not enough attention has been given to those different stages of the lifecycle. That’s what we keep seeing again and again, even with some of the most advanced data science teams. They’re super talented in terms of using the algorithms, the libraries, and the frameworks to do the model building, but when it comes to deployment, management, monitoring, and aligning that work to ongoing business impact, they run into problems.

VentureBeat: How does this get fixed?

Thomas: Software development went through this phase a long time ago. Application developers were just writing code, and it was difficult to get it all into production. You needed a structured approach, and DevOps came around. It’s the same kind of mindset, but now in the world of AI and machine learning. Just as a physical factory has a set of processes, a set of best practices, and people with certain skills to produce goods at scale and speed, you need a similar construct for AI. You need people, process, and technology.

If you look at the different stages of the lifecycle, the first part, planning and scoping, is a major step. IBM uses design thinking to tease out all of the aspects of the project in a very structured way. The next stage is data exploration, and the third stage is model building. That’s when you start to look at trustworthiness and whether the data is biased. All of the challenges around trustworthiness should be part of the model build stage itself.

Then the next stage is validation and deployment, where we set best practices. A validation team, which is separate from the model development team, has to come in to run the validation performance metrics, check for fairness, check that the model’s decisions can be explained, produce reports, and make sure certain criteria or thresholds the business has defined are met.

The final stage is ongoing monitoring and management. This is where you have guardrails in place for checking the ongoing performance of the model. Once you set this up, it’s just like a physical factory.
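To make the validation stage Thomas describes concrete, here is a minimal sketch of what an automated deployment gate could look like. The metric names, thresholds, and report fields are illustrative assumptions, not IBM’s actual tooling:

```python
# Hypothetical validation gate. The metrics, thresholds, and report
# fields below are illustrative assumptions, not IBM's actual process.
from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float              # performance on a holdout set
    disparate_impact: float      # fairness: ratio of favorable-outcome rates
    explanation_coverage: float  # share of predictions with an explanation

def validate_model(report: ValidationReport) -> bool:
    """Run business-defined checks before a model can be deployed."""
    checks = {
        "accuracy >= 0.85": report.accuracy >= 0.85,
        "0.80 <= disparate impact <= 1.25": 0.80 <= report.disparate_impact <= 1.25,
        "explanations for >= 95% of predictions": report.explanation_coverage >= 0.95,
    }
    for name, passed in checks.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(checks.values())

# Example: a model that is accurate but fails the fairness threshold.
validate_model(ValidationReport(accuracy=0.91, disparate_impact=0.72,
                                explanation_coverage=0.98))
```

The design point is the one Thomas makes: the deployment decision is made against thresholds the business defined upfront, by a team separate from the one that built the model.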

VentureBeat: Whose job is it to build this AI factory?

Thomas: It’s usually not the data science team, because they don’t want to be in the middle of any of this. What we have seen is that it’s the stakeholders. Each line of business has its own data science team chugging away at a bunch of models as part of a hub-and-spoke construct. The person who cares about consistency and scale across those lines of business is the one who champions setting up the factory. They have people from different departments participate in the factory, and IBM helps them set it up.

It’s not as if everything has to flow through it. That’s not what we’re saying. We are saying the spokes have the freedom to innovate, but they follow the same guidelines. They follow the same design thinking process for scoping and creating the action plan. They follow the same model governance. They have complete freedom in what algorithms and frameworks they use.

VentureBeat: Where do machine learning operations (MLOps) and DevOps fit within that factory?

Thomas: I didn’t use the term MLOps because there’s so much more that’s needed beyond MLOps: understanding trustworthiness, bias, fairness, explanations, and so on. The very nature of AI and ML is that it’s a probabilistic, not deterministic, paradigm. That’s not something a typical application development paradigm has to deal with.
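To illustrate the probabilistic-versus-deterministic point: a deterministic application can be unit-tested with exact assertions, while a model has to be tested statistically. A minimal sketch, with a random stand-in for a real model evaluation:

```python
# A deterministic unit test asserts f(x) == y exactly. For a probabilistic
# model, you instead check that a metric stays inside an acceptance band.
# The 0.9 "true" accuracy and the 0.85 band below are made-up assumptions.
import random

def accuracy_on_random_sample(n: int = 500) -> float:
    # Stand-in for scoring the model on a freshly drawn evaluation sample.
    return sum(random.random() < 0.9 for _ in range(n)) / n

runs = [accuracy_on_random_sample() for _ in range(20)]
assert min(runs) > 0.85, "model accuracy drifted below the acceptance band"
print(f"accuracy across 20 samples: {min(runs):.3f} to {max(runs):.3f}")
```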

VentureBeat: Do the AI factory and the software development factory need to merge?

Thomas: At a very high level there are similar constructs, but there are unique challenges to be addressed in the world of AI. I don’t think AI factories and software development factories will all become just one thing. There will be similar constructs and similar paradigms, but unique challenges need to be addressed uniquely.

VentureBeat: A factory implies automation. How will the data engineering process be automated?

Thomas: I don’t think we are at the point of automating everything. We want to automate the highly labor-intensive, manual, boring tasks as much as possible. If you’re working with a very large data set with hundreds or thousands of features, working through them is pretty boring, manual, labor-intensive work. You want to rely on automation as much as possible. Creating a pipeline for model deployment should be automated, but with a human in the loop. It is about making sure the domain experts are used in the right way along the different stages of the lifecycle while automating some of the more mundane tasks. That’s the reality of where we are.
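As a sketch of “automate the boring part, keep a human in the loop,” the example below uses scikit-learn’s SelectKBest to rank a large feature set automatically, then hands the shortlist to a domain expert for confirmation. The approval flow is hypothetical:

```python
# Automated feature ranking with a human approval step.
# The review mechanism (a console prompt) is a hypothetical stand-in
# for whatever workflow tool a real factory would use.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

def propose_features(X: pd.DataFrame, y: pd.Series, k: int = 10) -> list[str]:
    """Automate the tedious part: statistically rank hundreds of features."""
    selector = SelectKBest(score_func=f_classif, k=min(k, X.shape[1]))
    selector.fit(X, y)
    return list(X.columns[selector.get_support()])

def human_review(candidates: list[str]) -> list[str]:
    """Keep the domain expert in the loop before the pipeline proceeds."""
    print("Proposed features:", ", ".join(candidates))
    kept = input("Comma-separated features to keep (blank keeps all): ").strip()
    return [f.strip() for f in kept.split(",")] if kept else candidates
```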

VentureBeat: We hear about the democratization of AI all the time, where end users are going to be building their own little AI frameworks. How does that fit within a factory model?

Thomas: We look at the different stages all the way from the beginning. Even before a single line of Python code is written, you need the business owner to be part of the scoping and planning stage itself. A lot of times the data science team is running after data science metrics: ‘My model is great because look at the precision.’ But how that translates into the actual business KPI (key performance indicator) is often not very clear. Being able to understand how your model relates to the business KPI upfront, before a single line of code is written, is important. You need to have the business be part of this lifecycle.
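One way to force that alignment upfront is to write the business KPI down as a function of the model metrics before any model exists. The sketch below is hypothetical; the fraud-style framing, base rate, and dollar values are planning assumptions, not figures from the interview:

```python
# Hypothetical translation of model metrics into a business KPI.
# Every rate and dollar figure here is a made-up planning assumption.
def expected_value_per_1000_cases(precision: float, recall: float,
                                  base_rate: float = 0.05,
                                  value_true_positive: float = 200.0,
                                  cost_false_positive: float = 40.0) -> float:
    """Convert precision/recall on a fraud-style problem into dollars."""
    actual_positives = 1000 * base_rate          # real cases per 1,000
    true_pos = recall * actual_positives         # cases the model catches
    # precision = TP / (TP + FP)  =>  FP = TP * (1 - precision) / precision
    false_pos = true_pos * (1 - precision) / precision if precision else 0.0
    return true_pos * value_true_positive - false_pos * cost_false_positive

# "Precision went up" means little to the business; dollars per 1,000 cases do.
print(expected_value_per_1000_cases(precision=0.80, recall=0.60))  # 5700.0
```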

VentureBeat: A lot of end users are being told they don’t need a data science team to build an AI model as part of the democratization argument. Where is the line between the two?

Thomas: There are tools that lower the barrier to entry, for sure, but at some point you need the domain expert and the data scientist. Unless the businessperson and the data scientist are working hand in hand, you cannot get that last mile. You cannot get to something that will go into production. You can’t just throw data into a magical box and have it produce AI. That’s not real.
