Revolutionary JHU computer that can crunch 5 petabytes of data!
By ANI | Tuesday, November 2, 2010
WASHINGTON - Computer scientist and astrophysicist Alexander Szalay of Johns Hopkins’ Institute for Data Intensive Engineering and Science, and his colleagues, have developed a tool that will enable analysis of enormous amounts of data from both “little picture” and “big picture” perspectives.
Dubbed Data-Scope, the tool will enable data analysis tasks that are simply not possible today, according to Szalay.
“At this moment, the huge data sets are here, but we lack an integrated software and hardware infrastructure to analyze them. Data-Scope will bridge that gap,” he said.
Data-Scope will be able to handle five petabytes of data. That’s the equivalent of 100 million four-drawer file cabinets filled with text.
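The file-cabinet comparison works out under a rough estimate, not stated in the article, of about 50 MB of plain text per four-drawer cabinet:

```python
# Sanity check of the "100 million file cabinets" comparison.
# Assumption (not from the article): one four-drawer file cabinet
# holds roughly 50 MB of plain text.
PETABYTE = 10**15                  # bytes in a petabyte (decimal)
TEXT_PER_CABINET = 50 * 10**6      # assumed bytes of text per cabinet

data_scope_capacity = 5 * PETABYTE
cabinets = data_scope_capacity // TEXT_PER_CABINET
print(cabinets)  # 100000000, i.e. 100 million cabinets
```

With a different per-cabinet estimate the cabinet count shifts proportionally, but the order of magnitude of the comparison holds.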
“The Data-Scope will allow us to mine out relationships among data that already exist but that we can’t yet handle and to sift discoveries from what seems like an overwhelming flow of information,” Szalay said.
“New discoveries will definitely emerge this way. There are relationships and patterns that we just cannot fathom buried in that onslaught of data. Data-Scope will tease these out,” he added.
According to Szalay, there are at least 20 research groups within Johns Hopkins that are grappling with data problems totaling three petabytes.
Without Data-Scope, “they would have to wait years in order to analyze that amount of data,” Szalay said.
“Such systems usually take many years to build up, but we are doing it much more quickly. It’s similar to what Google is doing, of course on a thousand-times-larger scale than we are. This instrument will be the best in the academic world, bar none,” he said. (ANI)