The essential data buzzwords
11 July 2016
Take one major trend spanning the business and technology worlds, add countless vendors and consultants hoping to cash in, and what do you get? A whole lot of buzzwords with unclear definitions.
In the world of big data, the surrounding hype has spawned a brand-new lingo. This glossary of sorts highlights some of the main data types you need to understand.
The shining star in this constellation of terms is “fast data,” which is popping up with increasing frequency. It refers to “data whose utility is going to decline over time,” said Tony Baer, a principal analyst at Ovum who says he coined the term back in 2012.
Sources include Twitter feeds and other streaming data that must be captured and analysed in real time, enabling immediate decisions and responses. A capital markets trading firm, for example, may rely on fast data to conduct algorithmic or high-frequency trades.
“Fast data can refer to a few things: fast ingest, fast streaming, fast preparation, fast analytics, fast user response,” said Nik Rouda, a senior analyst with Enterprise Strategy Group. It is “mostly marketing hype,” but it “shows the need for performance in a variety of ways.”
Increased bandwidth, commodity hardware, declining memory prices and real-time analytics have all contributed to the rise of fast data, Baer said.
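To make the idea concrete, here is a minimal sketch in plain Python (a toy, not any vendor's product) of treating data as "fast": events are only counted while they are recent, so a data point's utility literally declines over time.

```python
import time
from collections import deque

class SlidingWindowCounter:
    """Counts events seen within the last `window_seconds`.

    Older events are discarded -- a toy model of fast data,
    where a data point's value decays as it ages.
    """
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # timestamps, oldest first

    def record(self, timestamp=None):
        """Record one event, defaulting to the current time."""
        self.events.append(timestamp if timestamp is not None else time.time())

    def count(self, now=None):
        """Return how many events are still inside the window."""
        now = now if now is not None else time.time()
        # Drop events that have aged out of the window.
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()
        return len(self.events)
```

A trading system might act the instant the count over a one-second window crosses a threshold; by the time an overnight batch job ran, the opportunity would be gone.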
At the opposite end of the spectrum is “slow data,” or data that might trickle in at a comparatively leisurely pace, warranting less-frequent analysis. Baer points to a device that monitors ocean tides as an example — for most purposes, real-time updates are not needed.
In general, this kind of data is better-suited for capture in a data lake and subsequent batch processing.
“Small data” is “anything that fits on one laptop,” said Gregory Piatetsky-Shapiro, president of analytics consultancy KDnuggets. Essentially, the term recognises the fact that “a lot of analysis is still done on one or a few data sources, on a laptop, using lightweight apps — sometimes even just Excel,” Rouda said.
As for “medium data,” it is in between.
When you are talking about many petabytes of data, that is big data, and you would likely use technologies such as Hadoop and MapReduce to analyse it, Baer said. But “most analytic problems don’t involve petabytes,” he added. When analyses involve data on a more intermediate scale, that is medium data, and you would likely use Apache Spark.
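Hadoop and Spark are cluster frameworks, but the map-and-reduce pattern they build on can be sketched in a few lines of plain Python. The canonical example is a word count; no cluster is involved here, this only shows the shape of the computation.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: sum the emitted counts for each distinct word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

docs = ["big data big hype", "fast data"]
counts = reduce_phase(map_phase(docs))
```

In a real Hadoop or Spark job the map and reduce steps run in parallel across many machines, with the framework shuffling the intermediate pairs between them; the single-machine version above is the whole idea minus the distribution.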
“Dark data” is typically data that gets overlooked and underused.
“People don’t know it’s there, don’t know how to access it, aren’t allowed access, or the systems haven’t been set up to leverage it yet,” Rouda explained. It crops up “all too often” in databases, data warehouses and data lakes, he said.
Bringing such restricted or poorly documented pools of data to light is generally the domain of data-discovery services, often using machine-learning algorithms, Baer said.
Last but not least, “dirty data” is nowhere near as fun as it sounds. Rather, it is simply a data set before it gets cleaned up.
“A matter of nature is that things are dirty until you clean them,” Baer said. “Unless you’ve performed some operation on it, data is not going to be clean.”
Those operations can include preparation, enrichment and transformation, Rouda noted. “Otherwise a lot of wrong answers are possible.”
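As a concrete (and purely illustrative) example, a minimal preparation pass in plain Python might trim whitespace, normalise case, and drop blanks and duplicates before any analysis is run:

```python
def clean_records(records):
    """A toy cleaning pass: strip whitespace, lowercase each value,
    drop empty strings, and de-duplicate while preserving order."""
    seen = set()
    cleaned = []
    for value in records:
        value = value.strip().lower()
        if not value or value in seen:
            continue  # skip blanks and repeats
        seen.add(value)
        cleaned.append(value)
    return cleaned

# Hypothetical raw input with stray spaces, casing noise and duplicates.
dirty = ["  Alice ", "alice", "", "Bob", "BOB  ", "carol"]
```

Real preparation and enrichment pipelines add validation rules, type coercion and lookups against reference data, but even this much prevents obvious double counting.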
One more thing…
Using data to grow your business takes a lot more than just understanding the lingo.
“There’s a gap between all the data that has become available and our ability to use it for insight,” said Brian Hopkins, a vice president with Forrester.
Bridging that gap could be a matter of using Hadoop, or it could be accomplished with simple self-service tools, Hopkins said. Either way, that connection has to be made before meaningful action can result.
“Vendors and analysts are great at creating new buzzwords,” he said. Rather than getting bogged down in terms, “my advice for CIOs is to stay laser-focused on outcomes that will transform your business.”
IDG News Service