Finished chips leaving the foundry undergo a series of tests. For critical automotive systems, testing is especially extensive and can add 5 to 10 percent to a chip's cost. But are all of those tests really necessary?
NXP engineers have developed machine learning algorithms that learn patterns in test results and figure out which tests are truly needed and which can safely be skipped. The engineers described the approach at the IEEE International Test Conference in San Diego last week.
NXP makes a variety of chips with complex circuits and advanced manufacturing processes, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders that secure cars. These chips are tested with different signals, at different voltages, and at different temperatures in a process called continue-on-fail: chips are tested in groups, and every chip is subjected to the full battery of tests even if some of them fail along the way.
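To make that setup concrete, here is a minimal Python sketch of a continue-on-fail flow. The test names, the run_test callables, and the lot structure are illustrative assumptions, not NXP's actual test framework.

```python
# Hypothetical sketch of a continue-on-fail test flow: every device in a lot
# runs through the full battery of tests, and failures are recorded rather
# than ending the sequence early.

from typing import Callable, Dict, List

def continue_on_fail(device_id: str,
                     tests: Dict[str, Callable[[str], bool]]) -> List[str]:
    """Run every test on one device and return the names of tests that failed."""
    failed = []
    for name, run_test in tests.items():
        if not run_test(device_id):   # record the failure...
            failed.append(name)       # ...but keep going (continue on fail)
    return failed

def test_lot(device_ids: List[str],
             tests: Dict[str, Callable[[str], bool]]) -> Dict[str, List[str]]:
    """Test a whole lot and keep a per-device record of failed tests."""
    return {dev: continue_on_fail(dev, tests) for dev in device_ids}
```

The per-device failure records produced this way are exactly the kind of data the recommendation approach described below can mine.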
“We have to ensure strict quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP fellow who led the study. But testing is one of the few knobs most chip companies can use to control costs, since much of the actual manufacturing and packaging of chips is outsourced to other companies. “What we were trying to do here is come up with a way to reduce testing costs in a way that is statistically rigorous and yields good results without compromising field quality.”
Test recommendation system
Shroff says the problem bears some resemblance to the machine learning-based recommender systems used in e-commerce. “We took a concept from the retail industry, where data analysts can look at receipts and see what items people are buying together,” he says. “Instead of a receipt for a transaction, we have a unique part identifier, and instead of the products a consumer buys, we have a list of tests that failed.”
The NXP algorithm then looks for tests that tend to fail together. Of course, the question of whether bread buyers also want butter is quite different from whether a chip that fails one test at a certain temperature would be caught by the remaining tests. “We need 100 percent, or close to 100 percent, certainty,” Shroff says. “We operate in a different regime of statistical rigor than the retail industry, but we borrow the same concepts.”
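To illustrate the analogy, here is a minimal Python sketch of a market-basket style co-failure analysis over continue-on-fail records. The rule form (“if test A fails, test B also fails”), the thresholds, and the function names are assumptions for illustration, not NXP's published algorithm.

```python
# Hypothetical market-basket style analysis of test failures: treat each
# part identifier as a "receipt" and its failed tests as the "items", then
# look for tests whose failures are (nearly) always accompanied by the
# failure of another test, making them candidates for removal.

from collections import defaultdict
from itertools import permutations
from typing import Dict, List, Set, Tuple

def redundant_test_candidates(failures: Dict[str, List[str]],
                              min_confidence: float = 0.999,
                              min_failures: int = 50) -> List[Tuple[str, str, float]]:
    """
    failures maps a unique part identifier to the tests it failed.
    For each ordered pair (A, B), compute the confidence of the rule
    'A fails -> B also fails'.  If the confidence meets min_confidence,
    every observed failure of A was also caught by B, so A is a candidate
    for removal, pending engineering review.
    """
    fail_count: Dict[str, int] = defaultdict(int)           # devices failing A
    co_fail: Dict[Tuple[str, str], int] = defaultdict(int)  # devices failing A and B

    for failed_tests in failures.values():
        unique: Set[str] = set(failed_tests)
        for test in unique:
            fail_count[test] += 1
        for a, b in permutations(unique, 2):
            co_fail[(a, b)] += 1

    candidates = []
    for (a, b), n_ab in co_fail.items():
        n_a = fail_count[a]
        if n_a < min_failures:   # too few failures to trust the statistic
            continue
        confidence = n_ab / n_a
        if confidence >= min_confidence:
            candidates.append((a, b, confidence))
    return candidates
```

Pairs flagged with near-100-percent confidence are only candidates: as Shroff notes below, each one still has to pass an engineering review before a test is actually dropped.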
While the results are statistically rigorous, Shroff says the algorithm's recommendations shouldn't be followed blindly. “You have to make sure it makes sense from an engineering perspective and is understandable in technical terms,” he says. “Only then should you remove a test.”
Shroff and colleagues analyzed data from testing seven microcontrollers and application processors built with advanced manufacturing processes. Depending on the chip, between 41 and 164 tests were performed, and the algorithm was able to recommend removing between 42 and 74 percent of them. Extending the analysis to data from other types of chips could offer even more opportunities to trim the test list.
The algorithm is currently a pilot project, and the NXP team aims to extend it to a wider range of parts, reduce its computational overhead, and make it easier to use.