The One Simple Method AI Implementers Use to Achieve Success

Who do you blame when AI projects fail? The technology? Your machine learning and data science team? Vendors? The data? You can certainly blame solving the wrong problem with AI, or applying AI when you don’t need it at all. But what happens when you have an application that is very well suited to AI and the project still fails? Sometimes it comes down to a simple principle: don’t delay.

At a recent Enterprise Data & AI event, a presenter said his AI projects take an average of 18 to 24 months to go from concept to production. That is far too long. There are many reasons why AI projects fail, and a common one is simply taking too long to get into production. AI projects shouldn’t take 18 or 24 months to go from pilot to production. Proponents of best-practice agile methodologies will tell you that it’s the old-fashioned “waterfall” way of doing things that is ripe for all sorts of problems.

Yet despite the desire to be “agile” with short, iterative sprints, organizations often struggle to get their AI projects off the ground. They simply don’t know how to run short, iterative AI projects. Indeed, many organizations run their AI projects as if they were research-style “proofs of concept.” When companies start with a proof of concept (POC) rather than a pilot project, they set themselves up for failure. Proofs of concept often lead to failure because they are not intended to solve a real-world problem; instead, they focus on testing an idea using idealized or simplistic data in an unrealistic environment. As a result, these organizations work with data that is not representative of real-world data, with users who are not heavily invested in the project, and potentially outside the systems where the model will actually live. Those who succeed with AI projects have simple advice: ditch the proof of concept.

AI pilots vs. proofs of concept

A proof of concept is a trial or test project that illustrates whether something is even possible and proves that your technology works. Proofs of concept (POCs) are run in very specific, controlled, and constrained environments rather than with real-world environments and data. This is largely how AI was developed in research settings. Not coincidentally, many AI project owners, data scientists, ML engineers, and others come out of this research environment, so it is one they are very comfortable and familiar with.

The problem with these POCs is that they don’t actually prove whether a specific AI solution will work in production. Rather, they only prove that it works in those limited circumstances. Your technology may work great in your POC, but break down when put into production against real-world scenarios. Also, if you run a proof of concept, you may have to start over and run a pilot, which will make your project run much longer than expected and could lead to staffing, resource, and budget issues. Andrew Ng encountered this exact problem when he tried to apply his POC approach to medical imaging diagnosis in a real environment.

Proof-of-concept failures exposed

POCs fail for a variety of reasons. The AI solution may have been trained only on good-quality data that does not exist in the real world. Indeed, this is the reason Andrew Ng cited for the failure of his medical imaging AI solution, which did not work outside the well-maintained data boundaries of Stanford hospitals. These POC AI solutions can also fail because the model has never seen how real users, as opposed to well-trained testers, will interact with it. Or there may be a problem with the real-world environment itself. Organizations that only run projects as POCs therefore have no chance to understand these issues until they are too far along.

Another example of POC failure involves autonomous vehicles (AVs). AVs often work very well in controlled environments: no distractions, no kids or animals running into the road, good weather, and none of the other common issues drivers face. The AV performs very well in this hyper-controlled environment, but in many real-world scenarios, AVs don’t know how to handle specific real-world issues. There’s a reason we don’t see Level 5 autonomous vehicles on the road. They work only in these very controlled environments and cannot scale as a replacement for a human driver.

Another example of a failed AI POC is SoftBank’s Pepper robot. Pepper, now abandoned as an AI project, was a collaborative robot intended to interact with customers in places such as museums, grocery stores, and tourist areas. The robot performed very well in test environments, but ran into issues when deployed in the real world. When deployed in a UK supermarket, which had much higher ceilings than the US supermarkets where it was tested, Pepper struggled to understand customers. It turned out that it also scared customers; not everyone was thrilled to have a robot approach them while shopping. Because Pepper was never actually tested in a pilot, these issues were never properly discovered and resolved, resulting in the cancellation of the entire rollout. If they had run a pilot, deploying the robot first to one or two locations in a live environment, they would have uncovered these issues before committing time, money, and resources to a failed project.

Build pilots, not proofs of concept

Unlike a POC, a “pilot” project focuses on building a small, real-world test, using real-world data in a controlled, limited environment. The idea is that you test a real-world problem, with real-world data, on a real-world system, with users who did not build the model. That way, if the pilot works, you can focus on scaling the project rather than porting a POC to an entirely different environment. A successful pilot project therefore saves an organization time, money, and other resources. And if it doesn’t work, you quickly find out what the real-world problems are and work to fix them so your model works. Much like a pilot guiding an aircraft to its final destination, a pilot project guides your AI solution to its destination: production. Why spend potentially millions on a project that may not work in the real world when you can spend that money and time on a pilot that then only needs to be scaled up to production? Successful AI projects don’t start with a proof of concept; they start with pilots.

It’s much better to run a very small pilot, solving a very small problem that can be scaled up with a high chance of success, than to try to solve a big problem with a proof of concept that might fail. This pilot-driven approach of small, iterative successes is the cornerstone of best-practice AI methodologies such as CRISP-DM and CPMAI, which provide guidance on developing small pilots in short, iterative steps to get quick results. Focusing on highly iterative, real-world AI pilots grounds your project in the simple method that many AI implementers use with great success.

About Ethel Nester
