Successful data-driven companies strive to improve data quality while reducing the time spent organizing and preparing data. The goal is to produce data that is immediately ready for analysis. The best way to get there is to make the data smart.
As we’ve seen, time is already the enemy of organizations driven by data due to the lag time involved in gathering and preparing data for traditional analysis. According to IDG Research’s 2016 Data and Analytics Survey, 90 percent of respondents have experienced pain points in areas such as data access, data transformation, data creation and collection, data migration, and data storage.
Further complicating matters, data volumes are already huge and only getting bigger, turbo-charging the rate at which data flows into the organization. While companies rely on vital intelligence from that data to optimize the user experience, few can effectively manage to pull quality data at speed from the deluge.
In order to wrest a full measure of worth from organizational data, you need to figure out on the fly which bits are important, make sure data quality is spot on, and add context that turns data into actionable information at the point of collection. When you can do that, you end up with smart data.
The difference between smart data and traditional data collection and analysis methodologies is profound, with implications for everything from improving customer experiences and operational efficiency to reducing security threats.
How Smart Data Helps
While smart data definitions vary somewhat, it is generally considered to be data that is prepared and organized at the collection point such that it is ready and optimized for analytics at the highest quality and speed.
Speaking at a recent conference, Donna Roy, executive director of the U.S. Department of Homeland Security’s Information Sharing and Services Office, said “her teams spend about 80% of their time just searching, ingesting, and getting data ready for analysis,” according to FedTech. Roy believes smart data will make it possible to take the slack out of the process and enable agencies to operate faster and smarter.
FedTech paraphrased Roy’s description of smart data as “data that is independent of software, applications, devices or networks but still is actionable. It’s also data that is self-describing and self-protecting. It has its own context and semantics.” That data is imbued with context, and that context is appended closer to the source of the data.
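As a rough illustration of that idea, a self-describing record might carry its payload together with its own context and semantics, so any consumer can interpret it without out-of-band knowledge of the producing application or device. This is a hedged sketch, not any particular product's format; all field names here are hypothetical.

```python
import json

# Hypothetical self-describing record: the payload travels with its own
# context (source, units, schema version), appended close to the source.
record = {
    "payload": {"weekly_sales": 1520},
    "context": {
        "source": "pos-terminal-17",
        "collected_at": "2017-05-01T00:00:00Z",
        "units": "USD",
        "schema_version": "1.0",
    },
}

serialized = json.dumps(record)    # independent of any one application
restored = json.loads(serialized)  # a different consumer can still act on it
print(restored["context"]["units"])  # → USD
```

Because the context rides along with the data, the record stays actionable even when it crosses software, device, or network boundaries.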
“Smart data means information that actually makes sense,” Wired reports in the article Big data, fast data, smart data. “It is the difference between seeing a long list of numbers referring to weekly sales vs. identifying the peaks and troughs in sales volume over time. Algorithms turn meaningless numbers into actionable insights. Smart data is data from which signals and patterns have been extracted by intelligent algorithms.”
With traditional analytics, data is amassed, groomed, and then processed on some fixed schedule, say daily or weekly. That workflow means the results are often old by the time the data is considered. Smart data, on the other hand, is accessed and transformed for analytics at the point of collection, which cuts down on data prep lag.
What does this mean in the business world? First and foremost, smart data helps companies pluck relevant data from the enormous volumes of data they are being flooded with. Knowing what your data is saying earlier is a huge boon in the digital world of business today. Smart data can play a critical role in a slew of activities, from healthcare monitoring and patient care to big data analytics, cloud migration, and network and application performance management.
Think, for example, of the problem cited by Bill Gillis, CIO of the Beth Israel Deaconess Care Organization in Boston, in part one of this blog. His organization wanted to gain more insight into patient health using claims data, but that data is typically not available for analysis until 90 days after the event that drove the patient to the healthcare organization in the first place. That is obviously too long to react in a meaningful way, rendering the data fairly useless. If the data could be made available sooner, the organization would suddenly have a rich new source of information it could use to help patients.
The following considerations are important to building a smart data strategy for your company:
- Consider the data source. All data sources are not created equal, and it's important to find the ones that yield the most current and relevant data. For example, some network monitoring tools today use unstructured machine data (log files, SNMP, etc.) that gets indexed and archived for analysis at some point. That approach has multiple limitations: it produces a mountain of data that must be sorted through, it only captures events that can be logged, leaving you with potential blind spots, and by the time the data is analyzed it is already old. Using wire data for network visibility is a better smart data bet: it gives a complete view of what is going on and, done right, can be accessed, collected, and transformed in real time.
- Ensure data quality. By some estimates, bad data costs companies, on average, 12% of revenue. In fact, the old garbage in, garbage out chestnut takes on heightened meaning given the critical use cases smart data serves, from business analytics to operational roles in data security and application performance management. You need a cohesive and consistent approach to building data quality across the organization, one that is baked into the corporate data governance handbook.
- Review the need for organizational changes. Data analysis efforts tend to be centralized, but with smart data the value starts to accrue soon after the data is first assessed, meaning there are more opportunities to act—and act sooner—on data closer to the point of collection. That will have implications for both technology strategies (are you equipped to capitalize on data faster?) and the way your teams are set up to act on the data (will you need a more distributed structure to get the most out of your data?).
- Embrace automation. Tools that automate the collection and transformation of data are vital, and the need will only grow as you try to extract value from the ever-growing data volumes coming from an ever-increasing number of sources (read: Internet of Things). There is simply no other way to get out in front of that fire hose and expect to be able to sensibly parse, prioritize, and interpret data.
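The quality and automation points above can be sketched together: a small, hedged example of an automated check that validates records as they are collected, before bad data reaches analytics. The required fields and rules are hypothetical placeholders.

```python
def validate(record):
    """Reject records that would poison downstream analytics (garbage in)."""
    required = ("value", "source", "timestamp")
    if any(key not in record for key in required):
        return False
    # A simple, illustrative quality rule: the value must be numeric.
    return isinstance(record["value"], (int, float))

incoming = [
    {"value": 3.2, "source": "app-1", "timestamp": "2017-05-01T00:00:00Z"},
    {"value": "n/a", "source": "app-2", "timestamp": "2017-05-01T00:00:05Z"},
    {"source": "app-3"},  # missing fields
]

clean = [r for r in incoming if validate(r)]
print(len(clean))  # only the well-formed record survives
```

In practice such rules would come from the organization's data governance standards; the value of automating them is that they run consistently on every record, at volume, with no manual grooming step.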
As noted, however, there are different ways to approach the problem. Take smart data applications for network and application performance management, which require instrumenting the far reaches of the network for full visibility. Hardware-based approaches can prove too costly and difficult to extend to cloud-based environments. That can be a non-starter for many organizations, but since part of smart data's value lies in being collected at the source, it's important to find an alternative. In this case, look for software tools that will cut costs while also extending reach across mixed environments, such as cloud and virtualized infrastructure.
Challenging as it may be, the ultimate success of digital transformation efforts will hinge on how good your data is and how fast you can act on it. Smart data promises to help you make smarter decisions faster, and the companies that do that best will be the ones that come out in front.
How is smart data being used? We’ll outline some examples of NETSCOUT Smart Data in action in the next two articles in this series.
~Written by John Dix. John is an IT industry veteran who has chronicled major shifts in IT since the emergence of distributed processing in the early '80s. An award-winning writer and editor, he was the editor-in-chief for NetworkWorld for many years and an analyst for research firm IDC.