Intel cuts chip test times by 25% with Hadoop
Intel IT has developed a predictive analytics solution that it claims can reduce chip test time by 25% and save $30 million (€23 million) over the next year.
Every chip Intel produces undergoes a rigorous quality check, involving a complex series of tests. The company has been investing in Big Data infrastructure for some time to gain faster insights, but last year the company deployed a new multi-tenant Big Data platform combining a third-party data warehouse appliance with Apache Hadoop.
Using the new platform to gather historical manufacturing information and combine it with new sources of data that had previously been too unwieldy to use, a small team of five people cut $3 million from the cost of testing a single line of Intel Core processors in 2012.
Extending this solution to more products, Intel IT expects to realise an additional $30 million in cost avoidance in 2013-2014. Initial test findings also indicate that these capabilities will be instrumental in meeting Intel’s goal of decreasing post-silicon validation time by 25%.
Intel IT is now identifying new use cases to extract even greater value from Big Data and Business Intelligence, and has implemented a range of on-demand self-service BI capabilities for Intel business groups to perform their own analysis and rapidly receive results.
"Predictive analytics is where we want to go with our analytics capabilities, in combination with what we can glean from Big Data," said Chris Shaw, Intel EMEA IT director.
"Up until now, everything has been a trailing indicator. You’ll go and get data from around the environment, you’ll put it into your database, you’ll crunch the numbers, and then it will come out with a report that tells you what happened in the past.
"The predictive piece we’re hoping for is that we can use real-time sensors out there within the environment, which from a chip design validation point of view for Intel might be sensors within our manufacturing capabilities."
Shaw said that one of the biggest challenges organisations face is processing large volumes of unstructured data from a wide range of sources. Pulling that data together with structured data and extracting insight from it can be very difficult, and often requires the help of a data scientist.
"The software can give you that definition, but ultimately that’s just another piece of data. The hardware then has to do the work to say, take that definition, apply it to this data and process it constantly so that you can provide meaningful information rather than just the data stream," he said.
Shaw admitted that calculating the return on investment can also be tricky. However, recent research by MIT revealed that companies taking advantage of data-driven decisions are generally 5% more profitable than their peer organisations.
He advised starting small in order to create a proof point that can then be used for further investments. In the case of Intel IT’s chip design validation project, the team has now grown from five people to 50, because the company was able to show that there was a significant return on investment.
Intel is now working with the Apache community to build out a hybrid cloud infrastructure based around Hadoop. The aim is to help the Hadoop project take best advantage of the features of Intel's hardware, and to give Intel feedback on the kinds of capabilities needed to support Hadoop.
"From an IT-only perspective, we know we have to understand more about the cloud, so if we get economies of scale by doing some work with the external vendor, work with the design teams and still provide the users something that they’re expecting moving forward, then it’s the most valuable way of getting a really good return on investment," said Shaw.
IDG News Service