Data is an essential ingredient for every aspect of business, and those that use it well are likely to gain advantages over competitors that do not. Our benchmark research on information optimization reveals a variety of drivers for deploying information, most commonly analytics, information access, decision-making, process improvements and customer experience and satisfaction. To accomplish any of these purposes requires that data be prepared through a sequence of steps: accessing, searching, aggregating, enriching, transforming and cleaning data from different sources to create a single uniform data set. To prepare data properly, businesses need flexible tools that enable them to enrich the context of data drawn from multiple sources, collaborate on its preparation to serve business needs and govern the process of preparation to ensure security and consistency. Users of these tools range from analysts to operations professionals in the lines of business.
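To make that sequence of steps concrete, here is a minimal sketch in Python using pandas. The file names, column names and the enrichment lookup are hypothetical; dedicated preparation tools perform the same sequence interactively and at scale.

```python
# A minimal sketch of the preparation sequence described above, using pandas.
# File names, column names and the enrichment lookup are hypothetical.
import pandas as pd

# Access: pull the same kind of records from two different sources
crm = pd.read_csv("crm_customers.csv")    # e.g., an operational export
web = pd.read_json("web_signups.json")    # e.g., a cloud application feed

# Transform: align column names and types so the sources can be combined
web = web.rename(columns={"email_address": "email", "signup_date": "created"})
crm["created"] = pd.to_datetime(crm["created"])
web["created"] = pd.to_datetime(web["created"])

# Aggregate: stack the sources into a single data set
customers = pd.concat([crm, web], ignore_index=True)

# Clean: standardize values and drop duplicate records
customers["email"] = customers["email"].str.strip().str.lower()
customers = customers.drop_duplicates(subset="email", keep="first")

# Enrich: add context from a reference table (here, region by country code)
regions = pd.read_csv("country_regions.csv")    # lookup: country -> region
customers = customers.merge(regions, on="country", how="left")

print(customers.head())
```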

Data preparation efforts often encounter challenges created by the use of tools not designed for these tasks. Many of today's analytics and business intelligence products do not provide enough flexibility, and data management tools for data integration are too complicated for analysts who need to interact with data ad hoc. Depending on IT staff to fill ad hoc requests takes far too long for the rapid pace of today's business. Even worse, many organizations use spreadsheets because they are familiar and easy to work with. However, when it comes to data preparation, spreadsheets are awkward and time-consuming and require coding expertise to make them perform these tasks. They also incur risks of errors in data and inconsistencies among disparate versions stored on individual desktops.

In effect, inadequate tools waste analysts' time, which is a scarce resource in many organizations, and can squander market opportunities through delays in preparation and unreliable data quality. Our information optimization research shows that most analysts spend the majority of their time not in actual analysis but in readying the data for analysis. More than 45 percent of their time goes to preparing data for analysis or reviewing the quality and consistency of data.

Businesses need technology tools capable of handling data preparation tasks quickly and dependably so users can be sure of data quality and concentrate on the value-adding aspects of their jobs. More than a dozen such tools designed for these tasks are on the market. The best among them are easy for analysts to use, which our research shows is critical: More than half (58%) of participants said that usability is a very important evaluation criterion, more than any other, in software for optimizing information. These tools also deal with the large numbers and types of sources organizations have accumulated: 92 percent of those in our research have 16 to 20 data sources, and 80 percent have more than 20 sources. Complicating the issue further, these sources are not all inside the enterprise; they also are found on the Internet and in cloud-based environments where data may be in applications or in big data stores.

Organizations can’t make business use of their data until it is ready, so simplifying and enhancing the data preparation process can make it possible for analysts to begin analysis sooner and thus be more productive. Our analysis of time related to data preparation finds that when this is done right, significant amounts of time could be shifted to tasks that contribute to achieving business goals. We conclude that, assuming analysts spend 20 hours a week working on analytics, most are spending six hours on preparing data, another six hours on reviewing data for quality and consistency issues, three more hours on assembling information, another two hours waiting for data from IT and one hour presenting information for review; this leaves only two hours for performing the analysis itself.

Dedicated data preparation tools provide support for key tasks that our research and experience find are still done manually in about one-third of organizations. These data tasks include search, aggregation, reduction, lineage tracking, metrics definition and collaboration. If an organization can reduce the 14 data-related hours mentioned above (preparing data, reviewing data and waiting for data from IT) by one-third, it will have an extra four hours or more a week for analysis – roughly 10 percent of a 40-hour work week. Multiply this time by the number of individual analysts and it becomes significant. Using the proper tools can enable such a reallocation of time to use the professional expertise of these employees.
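The arithmetic behind the two preceding paragraphs can be checked with a quick calculation. This is only a sketch using the hour figures cited above; the exact split will vary by organization.

```python
# Weekly hours from the breakdown above (20 hours spent on analytics overall)
hours = {
    "preparing data": 6,
    "reviewing quality/consistency": 6,
    "assembling information": 3,
    "waiting for data from IT": 2,
    "presenting for review": 1,
    "analysis itself": 2,
}
assert sum(hours.values()) == 20

# Data-related tasks: preparing, reviewing and waiting for data
data_related = (hours["preparing data"]
                + hours["reviewing quality/consistency"]
                + hours["waiting for data from IT"])     # 14 hours

recovered = data_related / 3    # a one-third reduction recovers about 4.7 hours
print(f"Hours recovered per analyst per week: {recovered:.1f}")
print(f"Share of a 40-hour week: {recovered / 40:.0%}")  # roughly 10-12 percent
```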

This savings can apply in any line of business. For example, our research into next-generation finance analytics shows that more than two-thirds (68%) of finance organizations spend most of their analytics time on data-related tasks. Further analysis shows that only 36 percent of finance organizations that spend the most time on data-related tasks can produce metrics within a week, compared to more than half (56%) of those that spend more time on analytic tasks. This difference is important to finance organizations seeking to take a more active role in corporate decision-making.

Another example is found in big data. The flood of business data has created even more challenges as the types of sources have expanded beyond the RDBMS and data appliances; Hadoop, in-memory and NoSQL big data sources exist in at least 25 percent of organizations, according to our big data integration research. Our projections of growth based on what companies are planning indicate that Hadoop, in-memory and NoSQL sources will increase significantly. Each of these types involves systems from various providers, each with its own interfaces for accessing data, let alone loading it. Our research in big data finds similar results regarding data preparation: The tasks that consume the most time are reviewing data for quality and consistency (52%) and preparing data (46%). Without automating data preparation and streamlining the loading of data, big data can become an insurmountable task for companies seeking efficiency in their deployments.

A third example is in the critical area of customer analytics. Customer data is used across many departments but especially marketing, sales and customer service. Our research again finds similar issues regarding time lost to data preparation tasks. In our next-generation customer analytics benchmark research, preparing data is the most time-consuming task (in 47% of organizations), followed closely by reviewing data (43%). The research also finds that data not being readily available is the most common point of dissatisfaction with customer analytics (in 63% of organizations). Our research finds other examples, too, in human resources, sales, manufacturing and the supply chain.

The good news is that these business-focused data preparation tools offer usability through spreadsheet-like interfaces and include analytic workflows that simplify and enhance data preparation. When searching for and profiling data and examining fields analytically, color can help highlight patterns in the data. Capabilities for addressing duplicate and incorrect data about, for example, companies, addresses, products and locations are built in for simplicity of access and use. In addition, data preparation is entering a new stage in which machine learning and pattern recognition, along with predictive analytics techniques, can help guide individuals to issues and focus their efforts on looking forward. Tools also are advancing in collaboration, helping teams of analysts work together to save time and take advantage of colleagues' expertise and knowledge of the data, along with interfacing to IT and data management professionals. In our information optimization research, collaboration is a critical technology innovation according to more than half (51%) of organizations. They desire several collaborative capabilities, ranging from discussion forums to knowledge sharing to requests on activity streams.
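As an illustration of the kind of profiling and duplicate handling these tools automate, the short sketch below uses pandas with hypothetical column names to flag likely duplicate company records after light standardization. Dedicated products do this interactively, visually and at much larger scale.

```python
import pandas as pd

companies = pd.read_csv("companies.csv")    # hypothetical source

# Profile: basic field statistics an analyst would otherwise compute by hand
print(companies.describe(include="all"))
print(companies.isna().mean())              # share of missing values per field

# Standardize: trim whitespace, normalize case, drop common legal suffixes
name = (companies["name"].str.strip().str.lower()
        .str.replace(r"\b(inc|corp|llc)\.?$", "", regex=True)
        .str.strip())

# Flag likely duplicates on the normalized name plus postal code
companies["dupe"] = (companies.assign(norm_name=name)
                     .duplicated(subset=["norm_name", "postal_code"], keep=False))
print(companies[companies["dupe"]].sort_values("name"))
```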

This data preparation technology supports ad hoc and other agile approaches to working with data that map to how businesses actually operate. Taking a dedicated approach can help simplify and speed data preparation and add value by enabling users to perform analysis sooner and allocate more time to it. If you have not looked at how data preparation can improve analytics and operational processes, I recommend that you start now. Organizations are saving time and becoming more effective by focusing more on business value-adding tasks.

Regards,

Mark Smith

CEO and Chief Research Officer

Teradata continues to expand its information management and analytics technology for big data to meet growing demand. My analysis last year discussed Teradata's approach to big data in the context of its distributed computing and data architecture. I recently got an update on the company's strategy and products at the annual Teradata analyst summit. Our big data analytics research finds that a broad approach to big data is wise: Three-quarters of organizations want analytics to access data from all sources and not just one specific to big data. This inclusive philosophy is how Teradata has designed its architectural and technological approach to managing the access, storage and use of data and analytics.

Teradata has advanced its data warehouse appliance and database technologies to unify in-memory and distributed computing with Hadoop, other databases and NoSQL in one architecture; this enables it to move to center stage of the big data market. Teradata Intelligent Memory provides optimal access to data based on usage characteristics for DBAs, analysts and business users consuming data from Teradata's Unified Data Architecture (UDA). Teradata also introduced QueryGrid technology, which virtualizes distributed access to and processing of data across many sources, including the Teradata range of appliances, Teradata Aster technology, Hadoop through its SQL-H, other databases including Oracle's and languages including SAS, Perl, Python and even R. Teradata can push processing down so that data and analytics are handled through parallel execution in its UDA, including data from Hadoop. The QueryGrid data virtualization layer can dynamically access data and compute analytics as needed, making it versatile enough to meet a broadening scope of big data needs.
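To show the general idea of data virtualization (not QueryGrid's actual API or syntax), the conceptual Python sketch below combines rows from a relational "warehouse" stand-in with a file-based stand-in for Hadoop-resident data. A real virtualization layer would push filtering and aggregation down to each system rather than pulling everything to the client.

```python
# Conceptual sketch only: this is NOT Teradata QueryGrid. It mimics the idea
# of federated access by joining a relational source with a file-based source.
import sqlite3
import pandas as pd

# "Warehouse" source (stand-in for a Teradata system); database is hypothetical
conn = sqlite3.connect("warehouse.db")
orders = pd.read_sql("SELECT customer_id, amount FROM orders", conn)

# "Hadoop-resident" source (stand-in for files reached through SQL-H)
clicks = pd.read_csv("clickstream_summary.csv")   # hypothetical exported summary

# A virtualization layer would push work to each system; here we simply
# combine the two result sets in one place and aggregate.
combined = orders.merge(clicks, on="customer_id", how="inner")
print(combined.groupby("customer_id")[["amount", "clicks"]].sum().head())
```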

Teradata has embraced Hadoop through a strategic relationship with Hortonworks. Its commercial distribution, Teradata Open Distribution for Hadoop (TDH) 2.1, originates from Hortonworks. It recently announced Teradata Portfolio for Hadoop 2, which has many components. There is also a new Teradata Appliance for Hadoop; this fourth-generation machine includes pre-integrated and preconfigured software along with the hardware and services. Teradata has integrated Hadoop into its UDA to make it a unified part of its product portfolio, which is essential because Hadoop is still maturing and is not yet ready to operate on its own in a fully managed and scalable environment.

Teradata has enhanced its existing portfolio of workload-specific appliances. It includes the Integrated Big Data Platform 1700, which handles up to 234 petabytes, the Integrated Data Warehouse 2750 for up to 21 petabytes of scalable data warehousing and the 6750 for balanced active data warehousing. Each appliance is configured for enterprise-class needs, works in a multisystem environment and supports balancing and shifting of workloads with high availability and disaster recovery. They are available in a variety of configurations of disks, arrays and nodes, which allows them to be tailored to enterprise use. The appliances run version 15 of the Teradata database with Teradata Intelligent Memory and interoperate through integrated workload management. In a virtual data warehouse the appliances can provide maximum compute power, capacity and concurrent-user support for heavy work such as connecting to Hadoop and Teradata Aster. UDA enables distributed management and operations of workload-specific platforms to use data assets efficiently. Teradata Unity is now more robust in moving and loading data, and Ecosystem Manager now supports monitoring of Aster and Hadoop systems across the entire range of data managed by Teradata.

Teradata is entering the market for legacy SAP applications with Teradata Analytics for SAP, which provides integration and data models across lines of business to use logical data from SAP applications more efficiently. Teradata acquired this product from a small company last year; it uses an approach common among data integration technologies today and can make data readily available through new access points to SAP HANA. The product can help organizations that have not committed to SAP and its technology roadmap, which proposes using SAP HANA to streamline processing of data and analytics from business applications such as CRM and ERP. For others that are moving to SAP, Teradata Analytics for SAP can provide interim support for existing SAP applications.

Teradata continues to advance JavaScript Object Notation (JSON) integration to support document-oriented databases that are schemaless and semistructured. JSON has become a critical tool as more applications need to store and access data efficiently. NoSQL databases have become more popular recently: 25 percent of organizations in our big data analytics research are using them today, 20 percent plan to use them within two years, and another 23 percent are evaluating NoSQL. With this focus, Teradata provides its customers application and operational support beyond just supporting data for analytic purposes.
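To illustrate why native JSON handling matters for analytics, the small Python sketch below flattens schemaless records into rows; the record contents are made up for the example. Teradata's own JSON support does comparable work inside the database rather than in client code.

```python
# Semistructured, schemaless records can be flattened into rows for analysis.
# Record contents are hypothetical.
import json
import pandas as pd

records = [
    json.loads('{"id": 1, "device": {"os": "iOS", "model": "6s"}, "events": 12}'),
    json.loads('{"id": 2, "device": {"os": "Android"}, "events": 7}'),  # no model
]

# json_normalize tolerates missing and nested fields, which rigid schemas do not
flat = pd.json_normalize(records)
print(flat)   # columns: id, events, device.os, device.model (NaN where absent)
```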

Teradata continues to expand its Aster Discovery Platform for discovery and exploration analytics and is also advancing visualization and interactivity, which could encroach on partners that provide similar advanced analytics capabilities. Organizations looking for analytic discovery tools should consider this technology overlap. Teradata provides a broad and integrated big data platform and architecture with advanced resource management to process data and analytics efficiently. In addition it provides archiving, auditing and compliance support for enterprises. It can support a range of data refining tasks including fast data landing and staging, lower workload concurrency, and multistructured and file-based data.

Teradata's efforts also extend to what I call big data or data warehousing as a service, which it offers as Teradata Cloud. The approach can operate across and be accessed from a multitenant environment, making the portfolio of Teradata, Aster and Hadoop available in what the company calls cloud compute units. These can be used in a variety of cloud computing approaches, including public, private and hybrid, as well as for backup and discovery needs. It has gained brand-name customers such as BevMo and Netflix, which have been public references for Teradata Cloud. Using this cloud computing approach eliminates the need to place Teradata appliances in the data center while providing maximum value from the technology. Teradata's advancements in cloud computing come at a good time: our information optimization research finds that a quarter of organizations now prefer a cloud computing approach, with eight percent preferring it to be hosted by a supplier in a private cloud.

What makes Teradata's direction unique is that it moves beyond its own appliances to embrace the enterprise architecture and existing data sources; this makes it more inclusive in access than other big data approaches, such as those from Hadoop providers and in-memory vendors that focus more on their own technology than on customers' actual needs. Data architectures have become more complex with Hadoop, in-memory, NoSQL and appliances all in the mix. Teradata has gathered this broad range of database technology into a unified approach while integrating its products directly with those of other vendors. This inclusive approach is timely as organizations are changing how they make information available, and our information optimization benchmark research finds improving operational efficiency (for 67%) and gaining a competitive advantage (63%) to be the top two reasons for doing that. Teradata's approach to big data helps broaden data architectures, which will help organizations in the long run. If you have not considered Teradata and its UDA and new QueryGrid technologies for your enterprise architecture, I recommend looking at them.

Regards,

Mark Smith

CEO & Chief Research Officer
