Spotlight Series: Data + BI - TechHQ Technology and business Mon, 07 Aug 2023 13:44:37 +0000 The secret AI agent: how automation can solve the problems in your contact center https://techhq.com/2023/07/atencion-al-cliente-centro-de-contacto-ai-automatizacion-soluciones/ Mon, 31 Jul 2023 16:33:13 +0000 https://techhq.com/?p=226751

As consumers interact with more sophisticated technology, the bar for what they expect from customer support rises. Discover the benefits of augmenting your support agents with AI-powered solutions.

The post The secret AI agent: how automation can solve the problems in your contact center appeared first on TechHQ.

Consumer technology is becoming increasingly advanced, partly thanks to the giant strides made in artificial intelligence (AI). While developments in this technology have been years in the making, many people are only now waking up to its power, particularly since the launch of ChatGPT and other generative AIs.

As we interact with more sophisticated devices and software, customers' expectations of their digital experiences with brands keep rising. That includes their interactions with customer support agents.

But meeting these increasingly demanding expectations is not always straightforward.

AI solutions. Source: Net2Phone

Digital contact center services are constantly being updated to help businesses keep up with their customers' growing demands. However, without the right provider and the right approach, trying to integrate every feature and communication channel can result in a disjointed system, which affects the effectiveness of interactions and the overall customer experience (CX).

If separate channels for chat, email, social media, and voice calls are offered without a proper strategy and platform, customers risk being exposed to different approaches and levels of context from agents. They may also be redirected from one channel to another, losing the interaction history along the way. Customers can end up repeating their details and getting frustrated, leaving them with a negative experience.

Cloud contact center solutions for businesses. Source: Net2Phone

An excessive number of channels also makes it harder for the agents customers speak with to offer a personalized experience. This should be a priority, since 71 percent of consumers expect individualized interactions from companies. Having to navigate multiple platforms to piece together the customer journey hampers an agent's ability to deliver efficient, personalized support, which lowers conversion rates.

Having all of these channels and features can pose a different set of problems for management. Each additional workflow – continually refined through software updates – means a larger volume of data to sift through when making informed decisions. This data overload also creates challenges in storing, processing, and extracting actionable intelligence without comprehensive strategies and tools in place.

Is AI the answer?

To meet customers' growing expectations for personalized experiences, as well as the ever-increasing amount of data that contact center services collect, providers must harness the power of AI.

A popular way of doing so is to start by adding a generative text AI chatbot to the website, but this can be a risky strategy. Without the right provider, the technology remains prone to inaccuracies and bias, which can reflect poorly on the brand responsible for implementing it.

So, rather than entrusting customer care to a chatbot alone, TechHQ looks at the more sophisticated ways companies can use AI to improve their customer and employee experience and, as a result, grow their business.

  • Call summaries improve agent efficiency

Rather than the agent having to take notes manually during or after a call, AI can autogenerate a summary as soon as the customer ends the conversation and hangs up. It turns the data collected during the call into action items and next steps, which can be used to draft follow-up emails. Not only does this make agents more efficient, it also makes it easier for them to deliver personalized customer care, because customers only receive the information they need and enjoy a seamless conversation across every channel.

  • Deep call analytics improve the customer experience

New algorithms can record metrics that a person could not, such as the number of words spoken per minute, the moments when customers and agents interrupt each other, and the talking-to-listening ratio in a conversation. By analyzing this data, AI can identify subtle trends that contribute to agent success. This valuable insight can be used to optimize training, improve call-handling processes and, ultimately, enhance the overall CX.

  • Suggestions for improvement close the skills gap

According to a report from Cresta Insights, the employee attrition rate at support-focused call centers is roughly 1.3 times higher than the US average. This high turnover creates a skills gap, because new staff often lack the training or experience needed to handle complex customer inquiries. Integrating AI into your contact center, however, can offer a solution.

Some algorithms can automatically generate suggestions for improvement from a thorough analysis of calls, coaching the agent without the need for constant input from managers. Newly hired staff can then improve quickly, closing the skills gap and reducing attrition in the long run.

  • Sentiment analysis enables fast issue resolution

Sentiment analysis accurately identifies tone of voice and context, providing crucial insight into customer interactions. It allows agents to adapt their approach so they can address concerns more promptly, while also enabling managers to identify areas for improvement in customer service. Sentiment analysis also serves as a tool for managers and agents to prioritize high-priority or at-risk interactions, letting them allocate resources effectively and ensure proactive issue resolution.

  • Accurate transcription speeds up call reviews

AI technology can enable more accurate transcription and recording of calls. This makes the post-call review process easier, leading to the swift identification of opportunities for agent improvement. Supervisors benefit too, since they can better understand the calls and incidents taking place across the business without having to listen to every conversation. This improved efficiency in reviewing and analyzing calls saves time while letting supervisors pass targeted feedback to agents. Their performance ultimately improves, as does the overall quality of customer interactions.

To keep up with customers' growing digital demands, now is the ideal time to add AI to your arsenal of support tools. Learn more about how net2phone can leverage AI-driven Voice over Internet Protocol (VoIP) solutions.

The post The secret AI agent: how automation can solve the problems in your contact center appeared first on TechHQ.
The Secret AI-gent: How automation can solve your contact center problems https://techhq.com/2023/07/customer-support-contact-center-ai-automation-solutions/ Mon, 31 Jul 2023 16:19:00 +0000 https://techhq.com/?p=226733

As consumers interact with more sophisticated technology, it raises the bar for what they expect from customer support. Discover the benefits of augmenting your support agents with AI-powered solutions.

The post The Secret AI-gent: How automation can solve your contact center problems appeared first on TechHQ.


Consumer technology is growing increasingly advanced, partially thanks to the huge leaps forward made in artificial intelligence (AI). While developments in this technology have been brewing under the surface for years, many are now becoming more aware of its power, particularly since the recent release of ChatGPT and other generative AI.

As we interact with more sophisticated devices and software, it raises the bar for what customers come to expect from their digital experiences with brands. That includes interactions with support agents.

But meeting these heightened expectations is not always straightforward.

Digital contact center services are constantly being updated to help businesses keep up with their customers’ rising standards. However, trying to integrate all the latest must-have features and communication channels can lead to a disjointed system, which impacts the effectiveness of interactions and overall customer experience (CX).

Cloud Contact Centre Solutions for Businesses, Source: Net2Phone

If you offer different channels for chats, emails, social media, and voice calls, customers are at risk of being exposed to varying approaches and levels of context from agents. They can also be deflected from one channel to another, losing the interaction history in the process. Customers may have to repeat their details and get frustrated, leaving them with a negative experience.

An excessive number of channels also makes it difficult for the agent they are speaking with to provide a personalized experience. This should be a priority, as 71 percent of consumers expect tailored interactions from companies. Having to navigate multiple platforms to piece together the customer’s journey will hinder the agent’s ability to deliver efficient and customized support, lowering their conversion rates.

Having all these channels and features can pose a different set of issues for managers. Each additional workflow – which is continuously becoming more advanced through software updates – results in more, and larger, pools of data to sift through when needing to make informed decisions. This data overload also poses challenges in storage, processing, and extracting actionable intelligence without comprehensive strategies and tools in place.

Cloud Contact Centre Solutions for Businesses, Source: Net2Phone

Is AI the Answer?

To cater to customers’ heightened expectations for personalized experiences, as well as the increasing amount of data that contact center services collect, providers must harness the power of AI.

A popular way of doing so is to hastily add a generative AI chatbot to the website, but this is a risky strategy. The technology is still prone to inaccuracies and bias, which can reflect poorly on the brand responsible for implementing it.

So, rather than letting a chatbot loose with customers, TechHQ looks at more sophisticated ways companies can utilize AI to improve their customer and employee experience and grow their business as a result.

  • Call summaries improve agent efficiency

Rather than the agent manually having to take notes during or after a call, AI can autogenerate a call summary as soon as the customer puts the phone down. It turns data collected during the call into action items and next steps, which can be used to compose follow-up emails. Not only does this allow agents to be more efficient, but they can also more easily provide a personalized CX. This is because customers will only receive the necessary information, and will maintain a seamless conversation across channels.
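
To make the idea concrete, here is a minimal sketch in Python of how a post-call summarizer might turn a transcript into action items and a follow-up email draft. The transcript format and cue phrases are invented for illustration; a production system would rely on a trained language model rather than keyword rules.

    # Illustrative only: a toy post-call summarizer.
    # Real platforms would use a trained language model, not keyword matching.
    ACTION_CUES = ("i will", "i'll", "we will", "we'll", "follow up", "send you")  # hypothetical cues

    def extract_action_items(transcript_lines):
        """Return transcript lines that sound like commitments or next steps."""
        return [line for line in transcript_lines
                if any(cue in line.lower() for cue in ACTION_CUES)]

    def draft_follow_up_email(customer_name, action_items):
        """Compose a plain-text follow-up email from the extracted action items."""
        bullets = "\n".join(f"- {item}" for item in action_items)
        return (f"Hi {customer_name},\n\nThanks for your call today. "
                f"Here is what we agreed:\n{bullets}\n\nBest regards,\nSupport team")

    call = ["Customer: My invoice shows the wrong plan.",
            "Agent: I'll correct the plan and send you an updated invoice by Friday."]
    print(draft_follow_up_email("Alex", extract_action_items(call)))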

  • Deep call analytics enhance customer experience

New algorithms can take note of metrics that a person could not, such as words spoken per minute, instances of callers and agents talking over each other, and talking-to-listening ratio. By analyzing this data, an AI can identify subtle trends that contribute to agent success. This valuable insight can be used to optimize training, improve call-handling processes, and ultimately enhance the overall CX.
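
As a rough sketch of how such metrics could be computed, the Python below works over diarized call segments; the (speaker, start, end, text) format is an assumption made for the example, not any particular vendor's schema.

    # Illustrative call metrics from diarized segments: (speaker, start_sec, end_sec, text).
    def call_metrics(segments):
        talk_time, words, interruptions = {}, {}, 0
        prev_end, prev_speaker = 0.0, None
        for speaker, start, end, text in sorted(segments, key=lambda s: s[1]):
            talk_time[speaker] = talk_time.get(speaker, 0.0) + (end - start)
            words[speaker] = words.get(speaker, 0) + len(text.split())
            if prev_speaker and speaker != prev_speaker and start < prev_end:
                interruptions += 1          # new speaker started before the last one finished
            prev_end, prev_speaker = end, speaker
        wpm = {s: round(60.0 * words[s] / talk_time[s]) for s in words if talk_time[s] > 0}
        agent, customer = talk_time.get("agent", 0.0), talk_time.get("customer", 0.0)
        ratio = round(agent / customer, 2) if customer else None
        return {"words_per_minute": wpm, "interruptions": interruptions,
                "talk_to_listen_ratio": ratio}

    print(call_metrics([("agent", 0, 10, "Hello, how can I help you today?"),
                        ("customer", 9, 20, "My order arrived damaged and I want a refund.")]))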

  • Suggestions for improvement close the skills gap

According to a report from Cresta Insights, the employee attrition rate at customer call centers that focus on support is about 1.3 times higher than the US average. This high turnover rate leads to a skills gap, as new staff often lack the training or experience needed to deal with complex customer inquiries. However, integrating AI into your contact center can offer a solution.

Algorithms can automatically generate suggestions for improvement based on deep call analysis, coaching the agent without the need for constant managerial input. The newer staff can then improve quickly, closing the skills gap and reducing the attrition rate in the long run.
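
A sketch of how metrics might be turned into coaching tips follows; the thresholds and metric names are invented for illustration and are not taken from any vendor's product.

    # Toy rule set that turns call metrics into coaching suggestions.
    def coaching_suggestions(metrics):
        tips = []
        if (metrics.get("talk_to_listen_ratio") or 0) > 1.5:
            tips.append("Let the customer speak more; aim for a balanced talk-to-listen ratio.")
        if metrics.get("interruptions", 0) > 2:
            tips.append("Avoid talking over the customer; pause before responding.")
        if metrics.get("words_per_minute", {}).get("agent", 0) > 170:
            tips.append("Slow down slightly so explanations are easier to follow.")
        return tips or ["No issues flagged on this call."]

    print(coaching_suggestions({"talk_to_listen_ratio": 2.1, "interruptions": 3,
                                "words_per_minute": {"agent": 180}}))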

  • Sentiment analysis allows for fast issue resolution

Sentiment analysis accurately identifies voice tone and context, providing crucial insights into customer interactions. It allows agents to adapt their approach so they can address concerns more promptly, while also enabling managers to identify areas for improvement in customer service. Moreover, sentiment analysis helps managers and agents prioritize high-priority or at-risk interactions, allowing them to allocate resources effectively and ensure proactive issue resolution.
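
For a sense of how this looks in code, the sketch below scores a couple of invented messages with an off-the-shelf sentiment model; it assumes the Hugging Face transformers package and its default English sentiment model, and the escalation threshold is arbitrary.

    # Minimal sketch: flag at-risk interactions with an off-the-shelf sentiment model.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")   # downloads a default English model

    messages = [
        "Thanks, that fixed it straight away!",
        "This is the third time I've called and nothing has been resolved.",
    ]

    for text, result in zip(messages, classifier(messages)):
        at_risk = result["label"] == "NEGATIVE" and result["score"] > 0.9   # arbitrary cut-off
        status = "ESCALATE" if at_risk else "ok"
        print(f"{status:8} {result['label']:8} {result['score']:.2f}  {text}")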

  • Accurate transcription speeds up call reviews

AI technology can allow for more accurate transcription and call recording. This makes for an easier post-call reviewing process, leading to the swift identification of opportunities for agent improvement. Supervisors benefit from this too, as they can gain a deeper understanding of calls and incidents taking place within the business without needing to listen to every conversation. This enhanced efficiency in reviewing and analyzing calls saves time while empowering supervisors to provide targeted feedback to agents. Their performance will ultimately be improved, as will the overall quality of customer interactions.
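
As one hedged example of what that transcription step can look like, the snippet below uses OpenAI's open-source Whisper model; it assumes the openai-whisper package is installed, and call.wav is a placeholder recording.

    # Sketch: transcribe a recorded call and print timestamped segments for review.
    import whisper

    model = whisper.load_model("base")        # small, CPU-friendly model
    result = model.transcribe("call.wav")     # placeholder filename

    print(result["text"])                     # full transcript for the post-call review
    for seg in result["segments"]:            # timestamps let supervisors jump to key moments
        print(f"[{seg['start']:6.1f}s - {seg['end']:6.1f}s] {seg['text']}")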

To keep up with the increasing digital demands of customers, there has never been a more important time to add AI into your support arsenal. Learn more about how net2phone can leverage AI-driven Voice over Internet Protocol (VoIP) solutions.

The post The Secret AI-gent: How automation can solve your contact center problems appeared first on TechHQ.

Developing analytical applications at the speed of data with Tinybird https://techhq.com/2022/11/data-ingress-process-present-etl-elt-build-apps-fast-usp-best-platform-mysql-kafka-postresql/ Fri, 18 Nov 2022 15:28:42 +0000 https://techhq.com/?p=219484

We cover Tinybird, the solution that's replacing complex data instances, making DevOps look good and CFOs smile. Create your app's USP, simply and fast.

The post Developing analytical applications at the speed of data with Tinybird appeared first on TechHQ.


Marketing executives in the tech world are fond of broad statements like “moving at the speed of data.” Regardless of how you might interpret that, one thing is certain: gaining utility from data is far from a speedy process. At a time in the evolution of technology when rapid improvements in hardware have made massively parallel processing and multi-threading par for the course, and even a modest laptop comes with eight or more processor cores, outsiders to the data “world” would be forgiven for thinking that something doesn’t add up. With all these resources on tap, why do even “cutting edge” data-intensive applications seem to come with built-in delays when presenting information that’s theoretically available in just a few milliseconds?

In reality, getting data from creation to useful presentation is quite hard. Batch processing still dominates the data landscape, so most data consumers are forced to wait for the insights that data might offer. Data warehouses have admirably addressed the business intelligence needs of organizations with widespread and disparate data sources, but the sheer vastness of the data and the complexity of analyzing it mean that the average dashboard shows analytics on data that’s hours or days old. Executives and decision-makers have put up with this for a while, but the average consumer expects better. Thanks to the public internet, gigabit bandwidth, and the smartphone, consumers demand applications that respond right now. Slow data might work for internal business intelligence, but it doesn’t work for user-facing applications.

Whether you’re an IT manager, a data engineer, a developer, or a business analyst, you understand that the processes involved in Extract, Transform, Load (ETL) operations – or any variations thereon – are complex, and complexity eats resources, especially time. It is not simple to ingest, normalize, process and present information in a way that is useful, but companies want to create user-facing applications and services based on data. If they succeed, it can be highly lucrative. If they fail, a competitor may edge them out. Startups and the careers they support live and die by the cleverness of code and the responsiveness of applications in the hands of their users.

An effective data-based application has to have its ducks in a row. It must ingest the freshest data, transform it into valuable analytics, and present those results to the user, all in a timeframe that’s acceptable (or better) to the user. In reality, this means milliseconds.

But ducks are cumbersome beasts. In contrast, Tinybird has become an accelerating force for application teams that need speed, scale and simplicity when building with data. Where application backends might once have been constructed from an amalgam of databases, orchestrators, stream processors, and the other trappings of the “modern data stack”, Tinybird offers the ingest-to-publish data pipeline on tap. And it’s a solution that developers love because it’s delightful and empowering to work with, allowing them to remain creative and effective while eliminating the mundane complexities of working with massive amounts of data.

The Tinybird workflow is surprisingly lean and simple. Data can be ingested easily and in real time using native connectors for multiple sources. The platform does the heavy lifting of provisioning, maintaining, and scaling the data infrastructure needed to support low latency and high concurrency. Transforming and enriching that data, in an interface that the platform terms Pipes, is achieved with simple SQL.

But the unique magic of Tinybird is how these SQL queries can then be published as fully documented, low-latency APIs in a single click. With this framework, developers can build proofs of concept with just a few dozen lines of code, and even in highly complex production environments with many different data sources and transformations, the platform shines in its simplicity and its ability to remove friction from the paths developers follow to push their products to market.
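
To give a flavor of what that loop looks like from the developer's side, here is a hedged Python sketch that pushes an event through Tinybird's Events API and reads back from a published Pipe endpoint. The data source name, Pipe name, and token are placeholders, and the exact routes and SQL should be checked against Tinybird's current documentation.

    # Sketch of the Tinybird ingest-to-publish loop over its HTTP APIs.
    import json
    import requests

    TOKEN = "p.XXXX"                              # placeholder token
    BASE = "https://api.tinybird.co/v0"

    # 1. Ingest an event through the Events API (NDJSON body, one JSON object per line).
    event = {"timestamp": "2022-11-18T15:28:42Z", "path": "/pricing", "user_id": 42}
    requests.post(f"{BASE}/events", params={"name": "page_views"},
                  headers={"Authorization": f"Bearer {TOKEN}"},
                  data=json.dumps(event))

    # 2. A Pipe transforms the data with plain SQL, for example:
    #      SELECT path, count() AS views FROM page_views GROUP BY path ORDER BY views DESC
    #    Publishing that Pipe exposes it as a low-latency JSON endpoint:
    resp = requests.get(f"{BASE}/pipes/top_pages.json",
                        headers={"Authorization": f"Bearer {TOKEN}"})
    print(resp.json())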

Use cases for Tinybird range from real-time financial processing, to usage-based billing, to eCommerce personalization, to anomaly detection and log analysis, to user-facing analytics dashboards that update as soon as new data gets created. Any application that needs to analyze and process large amounts of data and present the results to its users is a good fit for the Tinybird platform.

In many ways, batch processing is an anomaly born of habit and expectations lowered by the realities of working with data warehouses. But it need not be the case. Companies that know the value of the data they collect and want to leverage it in their products and user experiences see Tinybird as a catalyst for this development.

You can read the documentation and see the code examples for yourself, and sign up for free to try the service. Tinybird is winning over companies that really do want to build applications that, to borrow a phrase, move at the speed of data.

The post Developing analytical applications at the speed of data with Tinybird appeared first on TechHQ.

Business Intelligence thought leaders: Ken Kuek of InterSystems https://techhq.com/2022/10/business-intelligence-thought-leaders-ken-kuek-intersystems-iris/ Fri, 28 Oct 2022 13:03:19 +0000 https://techhq.com/?p=219039

There’s a great deal of hyperbole in every area of the press, and technologists are no different. Even on these pages, we’re wont to talk about business intelligence platforms and the deluges of data available to most organizations of any size that can accrue information almost as a matter of course. So it’s worth paying...

The post Business Intelligence thought leaders: Ken Kuek of InterSystems appeared first on TechHQ.


There’s a great deal of hyperbole in every area of the press, and technologists are no different. Even on these pages, we’re wont to talk about business intelligence platforms and the deluges of data available to most organizations of any size that can accrue information almost as a matter of course.

So it’s worth paying attention to industry figures working day in, day out in the data industry. People at the coal face of the new technological revolution tend to be more pragmatic, and one such is Kenneth Kuek, Business Development Director at InterSystems.

“Now people are smart,” he said, speaking to us exclusively last month. “They think that ‘Oh, I don’t need that amount of data; let’s choose data that [we] are able to interact, to better make use of it, and of course, use machine learning and AI to achieve better outcomes.'”

InterSystems specializes in two areas where there’s data to be mined and definite positive outcomes from proper process and treatment of that data: the finance and medical sectors. In the latter case, patient and pathological data is rapidly becoming entirely digitized. “In healthcare, we are actually able to produce analytics, not only in the application layer but also in the data layer. We’re able to wrangle the data, for example, for the researchers’ analytics, [and] for the healthcare worker to understand, to digest data, and produce detailed reports,” Ken said.

Smart Business Intelligence Platforms

Where the uninitiated read ‘AI,’ there is often a misconception about that technology’s abilities. It’s not a magic wand that an organization can wave over its collected data and suddenly be presented with meaningful insights. “So, a lot of people are trying to sell AI [or] machine learning services. But it is not that straightforward. Yeah, it’s not, ‘I have two terabytes of data and I’m just going to throw [that] into your analytic system, and I’m going to get the result that I want.’ To understand the output, or the needed outcome, is most important.”

Even a company that offers highly advanced machine learning as-a-service appreciates that data, in its raw forms, needs significant treatment and consideration before ML (machine learning) models can be applied.

The Need for Data Science Jobs

“We still need data scientists to come in to provide the parameters, depending on what data sources are wanted. […] We render the data in order to make [it] cleaner and very easy for the data scientists to apply the parameters and output to the reports the user expects. So it’s not that you engage [directly] in the system; you subscribe to our IRIS data platform. And, we still need professionals like data scientists to draw the parameters: this is something very important.”

Any company or organization investing in processing its data resources can sign up for a service like InterSystems’ IRIS. And many prefer to ‘roll their own’ from the open-source libraries that are freely available. But even as those frameworks increase in power, thanks to the thousands of contributors, Ken Kuek is pretty sure of InterSystems’ longevity: “I think a mature data platform like the IRIS system will still have a very, very strong foothold,” he said.

Even users of IRIS and similar cloud-based, pay-as-you-go AI services will still need dedicated data scientists to make sense of data sets, find the necessary silos of information and ensure their veracity. But as in most areas of business, it comes down to return on investment, buck for buck. Ken asserts that InterSystems’ solutions are more scalable, reliable, and therefore more effective than the immediate competition or the manually pieced-together solution.

The post Business Intelligence thought leaders: Ken Kuek of InterSystems appeared first on TechHQ.

The tools to manage unstructured data https://techhq.com/2022/10/the-tools-to-manage-unstructured-data/ Tue, 18 Oct 2022 22:24:52 +0000 https://techhq.com/?p=218840

In Part 1 of this article, we explored the nature and the scale of the challenge businesses face with the rise and rise of unstructured data. We sat down with Krishna Subramanian, President at Komprise – a specialist in unstructured data management – to explore that challenge. While we had her in the chair, we...

The post The tools to manage unstructured data appeared first on TechHQ.


In Part 1 of this article, we explored the nature and the scale of the challenge businesses face with the rise and rise of unstructured data. We sat down with Krishna Subramanian, President at Komprise – a specialist in unstructured data management – to explore that challenge.

While we had her in the chair, we asked Krishna to expand on the ways to turn the largely unknown chaos of unstructured data into the order of managed unstructured data – the principles at work, and the tools that make it possible.

THQ:

You mentioned that the problems and opportunities of unstructured data had taken the business world by surprise.

KS:

Yes, everyone was looking at how to deal with structured data as it grew and grew, and now suddenly there’s a new data problem to tackle. Everything we hadn’t exactly realized was data because it didn’t fit in a database – suddenly it’s important.

THQ:

We discussed the importance of having data visibility in Part 1. We imagine that’s the first tool you need, the tool that shows you what you have and where it is. And you mentioned that there was an issue with moving data?

KS:

Yes, that was a big finding from the survey we ran on data management. Around 43% of companies are trying to move data to the cloud. How do you do that without disrupting either the company’s operation or the user’s experience?

We mentioned the real-world example of hosting all your photos on your cellphone, and the potential of instead, hosting them all in low-cost cloud storage in a way that still, to you the cellphone owner, looked and felt as though you were accessing them directly on your phone – but which didn’t cost you storage space on your phone. Imagine that on a company-wide scale, and that’s the sort of tool you need to build – something that can pull that off without anyone noticing except the CFO, who sees the reduced storage bill.

The trick is to move the data to its optimal storage location without disrupting users. Because if they know about it, if it impinges on their experience, they will resist. They won’t want their data moved, and they certainly don’t want all their applications to suddenly start breaking. They don’t want to go looking for a particular file, and suddenly not be able to find it because you’ve moved it to somewhere else.

THQ:

And they especially won’t want it moved as soon as you say you have to move it.

KS:

Right, exactly. So you need a transparent kind of moving process where there is no disruption to what the users are doing. So it looks like it’s still local, but it can actually be sitting somewhere else. Right.

THQ:

So, talk to us about the suite of tools we actually need to manage unstructured data.

KS:

For decades, there have been lots of products built for structured data. There’s a whole variety of tools to analyze structured data, to sort and place structured data in the right place, to run data lakes on structured data. We need similar tools for unstructured data.

People have always thought of unstructured data as a storage problem. “Oh, I’ll just buy the cheapest storage I can, and that will take care of it.”

But now, the data volumes are too large. It’s gone beyond being a storage problem. Realizing that data management needs actual data management tools for unstructured data is the first step. And those tools have to give you visibility.

Analysis is an element of it. Can you assess what’s in these environments? Can you give visibility? Can you help someone plan? Then there’s the question of whether you can deliver policy-based automated data movement, so you don’t have to babysit the solution. You need to be able to just say “I want data moved here, and I want its lifecycle managed like this.” So you need data analysis tools, data mobilization tools, and then data extraction tools, because ultimately, why are you keeping all this data around?

Again, in our survey, 43% of respondents said they want to give more self-service to their departmental users for unstructured data. And if your unstructured data is a mess, your users probably don’t even know it’s there. So how do you make it easier for people to search and call and find the data that’s interesting, and then use it in a big data application or AI or ML application, so you can monetize this data better?

Those are the different ways in which unstructured data management is evolving.

THQ:

How does one get visibility on a data issue of this size, which continues to grow?

KS:

You need to have a standards-based solution. Storage environments all speak some common languages these days. There are file languages like NFS and SMB, and there are object languages like Amazon S3.

So, if your tools can talk to various storage environments in common languages, they can look at what’s inside those environments, and give analysis. And if you can do that, then you don’t need a proprietary solution for every environment. You can have an independent solution that works with your whole data center and your cloud accounts.

That should show you how much data you have, how much is hot, how much is cold, who’s using it, all those things. The information is there in the metadata of all these files and objects, but you need a query engine that can look up from this environment. That’s how you can solve that problem.
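
As a very small illustration of the metadata-driven visibility Krishna describes, the Python below walks a local directory and splits files into hot and cold by last-access time. The 90-day cut-off is arbitrary, and a real product would run the same kind of pass over NFS/SMB shares and object stores (for S3, boto3's list_objects_v2 exposes the equivalent size and date metadata).

    # Toy visibility scan: classify files as hot or cold from filesystem metadata.
    import os
    import time

    def scan(root, cold_after_days=90):
        now, cutoff = time.time(), cold_after_days * 86400
        totals = {"hot": 0, "cold": 0}                  # bytes per temperature class
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.stat(os.path.join(dirpath, name))
                except OSError:
                    continue                            # skip unreadable files
                bucket = "cold" if now - st.st_atime > cutoff else "hot"
                totals[bucket] += st.st_size
        return totals

    print(scan("."))    # e.g. {'hot': 1234567, 'cold': 98765432}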

THQ:

Which is what you do.

KS:

Which is what we do.

It’s about taking the chaos and giving the customer control. They can say “Here are my data centers, and here are my cloud accounts,” and we will find all the storage environments, find the data that’s sitting in them, organize it by who has it, how fast it’s growing and so on.

And then the customer gets to set policies. Anything over three years old, maybe write a policy that it goes to Amazon Glacier. Anything that’s really hot and important, write a policy to put it on your most expensive flash storage, so people can really get value out of it. Other, less important data, you can write a policy to put it into standard long-term storage. You set these policies, we move the data according to your policies so it’s in the right place, and we move it transparently.
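
In its simplest form, a tiering policy of the kind described might look like the sketch below; the tier names and age thresholds are invented to mirror the examples in the answer, not taken from Komprise's product.

    # Toy policy evaluation: pick a storage tier for a file based on its age.
    from datetime import datetime, timedelta

    POLICIES = [                                   # evaluated in order; first match wins
        {"older_than_days": 3 * 365, "tier": "archive (e.g. Amazon Glacier)"},
        {"older_than_days": 90,      "tier": "standard long-term storage"},
        {"older_than_days": 0,       "tier": "flash (hot data)"},
    ]

    def tier_for(last_accessed, policies=POLICIES):
        age = datetime.now() - last_accessed
        for policy in policies:
            if age >= timedelta(days=policy["older_than_days"]):
                return policy["tier"]
        return policies[-1]["tier"]

    print(tier_for(datetime.now() - timedelta(days=4 * 365)))   # -> archive tier
    print(tier_for(datetime.now() - timedelta(days=10)))        # -> flash (hot data)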

THQ:

So what you’re essentially doing is taking a massive data problem and creating a new data architecture, governed by the policies that the client wants, once they’re aware of all the unstructured data they own?

KS:

Exactly. We’re enabling them to non-disruptively evolve their data architecture.

THQ:

What do you think is the overall mood of the market when it comes to unstructured data?

KS:

I think it’s a very exciting time in the market. Because even though this problem kind of crept up on people, there is a lot of innovation happening in terms of how to solve it. And for me, what’s most exciting is AI, because AI and ML actually require unstructured data, not structured data. To have really advanced AI or ML you need unstructured data management, because you have to bring unstructured data into those systems. So it feels as though there are multiple market forces at play that make this a very exciting area of innovation.

THQ:

An unstructured data gold rush, even.

KS:

Yes, it is. It really is.

The post The tools to manage unstructured data appeared first on TechHQ.

The challenges of unstructured data https://techhq.com/2022/10/unstructured-data-management-challenges/ Tue, 18 Oct 2022 20:49:20 +0000 https://techhq.com/?p=218835

The business world doesn’t like to be surprised – especially not by extremely complex, potentially expensive problems. Which is why the problem of unstructured data is so fundamentally vexing. Unstructured data is an already huge amount of data in most medium-to-large companies, and it’s getting bigger every day. Knowing about it, categorizing it, storing it,...

The post The challenges of unstructured data appeared first on TechHQ.


The business world doesn’t like to be surprised – especially not by extremely complex, potentially expensive problems. Which is why the problem of unstructured data is so fundamentally vexing. Unstructured data is an already huge amount of data in most medium-to-large companies, and it’s getting bigger every day. Knowing about it, categorizing it, storing it, and even eventually monetizing it is a whole sequence of enormous headaches that on the one hand, business could do without. But if you get all that right, it can prove to be like finding gold in your basement – so you can’t afford to just sweep unstructured data under your carpet.

We sat down with Krishna Subramanian, President at Komprise – a company that specializes in unstructured data management – to explore the scale of the challenge.

THQ:

So – what are the challenges that companies are facing with their unstructured data?

KS:

Well, 85% of the data in the world today is unstructured data. That’s all the data that doesn’t sit in a database somewhere – everything from photos on your phone to X-rays in your medical records, to videos on TikTok to genomic sequence files. And it wasn’t always this way – if it had been, we’d have had protocols in place for it long before now.

Even ten years ago, people wouldn’t have thought of unstructured data as data per se, and there was vastly less of it about. It’s grown very rapidly in the age of the smartphone, the cloud, and the ubiquitous video.

That means it’s caught everybody off guard.

THQ:
So it’s a problem of suddenness and scale?

KS:
To some extent yes. We ran a survey of practitioners last year, and we ran it again this year, and even between the two, the difference is startling. This year, more than half the respondents said they’re dealing with over five petabytes of data. And a petabyte is 1,024 TB.

THQ:
So, say 5,000 laptops full. To the brim.

KS:

Yeah. Certainly, it’s the equivalent of around 10 billion photos. Per organization.
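
(As a rough sanity check on those figures, assuming 1 TB per laptop and roughly 0.5 MB per photo – both back-of-the-envelope assumptions:)

    # Back-of-the-envelope check on what 5 PB represents.
    PB_IN_TB = 1024
    PB_IN_MB = 1024 ** 3

    tb = 5 * PB_IN_TB                   # 5,120 TB
    laptops = tb                        # ~5,120 one-terabyte laptops
    photos = 5 * PB_IN_MB / 0.5         # ~10.7 billion half-megabyte photos

    print(f"{tb:,} TB ≈ {laptops:,} laptops ≈ {photos / 1e9:.1f} billion photos")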

So to us, that’s the first and biggest challenge of unstructured data – the rapid growth of the issue.

The second big issue begins a kind of domino effect. Because companies don’t know what the data is. Or where it is. Or how many times they have the same data. Or what data’s important. Or what could be hot for monetization, and what’s cold. In fact, most companies are still treating all their unstructured data as though it’s the same sort of data, when it clearly isn’t. But – and here’s the crunch – if you don’t know what you have and where it is, you have to treat it all the same, because you don’t know what might be important.

Which is why lots of companies (68% of our survey respondents) are spending over 30% of their IT budget on unstructured data management. Relatively suddenly – again, this wasn’t an issue a handful of years ago. If you’re an IT department trying to fend off a growing cybersecurity threat, and 30% of your budget is suddenly being siphoned off to manage your unstructured data, probably fairly poorly, your company’s in potential trouble.

THQ:

The double jeopardy of a rising data management threat and potentially underfunding your IT department in everything else it’s called on to do.

KS:

Right.

So all of that together is really the biggest problem. How do you manage the root of this data? Without treating all this data as if it’s the same, and importantly, without interrupting the user experience.

THQ:

Okay – how do you do that?

KS:

It all starts with visibility. It’s very difficult to solve a problem you can’t see, or can’t fully understand. So, step 1 – find out what you have.

THQ:

As easy as that?

KS:

As difficult as that. Because unstructured data is piling up in many different places, all the time. And so it’s very hard to get an inventory of it, because it might be sitting in different storage systems, it might be sitting inside applications, behind the application, it might be piling up in the cloud.

So the first thing you need is something that can give you visibility across all the silos, and show you exactly how much unstructured data your organization has, how fast it’s growing, what is hot and important, what people are actually using, and what data is cold.

Because that’s the real problem with treating all unstructured data as if it’s hot. 80% of it is actually cold. Think about it in your own life. You probably take a lot of videos of your kids or pictures on your cellphone. We all do that these days. But how often do you go and look at every photo? You don’t look at all of them all the time. A lot of that data is cold data.

So, it could be better managed. What if it could sit in low-cost storage in the cloud, but look like it’s still on your phone? You can see the thumbnail on your phone, and whenever you want it, you click on it and you can get it, but it’s not eating up all the storage on your phone, right? It’s the same thing for companies.

So knowing what you have, knowing what’s important and what isn’t, knowing what’s being used and what isn’t and how much it’s costing you is the first step.

The second step to solving unstructured data management is to mobilize your data efficiently. Unstructured data is such a big ‘thing,’ spread across so many file types and sizes, that moving it from one place to another isn’t all that easy. Doing it manually is in no way time-efficient – it would build up faster than you could move it. So you need some sort of automated process to do it for you, one that can adapt to any of the multiple networks you might be using in the silos where your unstructured data is stored. That’s a step in the right direction towards data mobilization.

What about security? 80% of your unstructured data may be cold data as far as you’re concerned, but it’s still your data – you want it secured, so that no-one else might be able to steal it and potentially monetize it in ways you’ve never imagined.

THQ:

So the bottom line is, you need a set of automated data management tools to be able to deal with unstructured data even remotely effectively in 2022?

KS:

We think so. That’s why we built some.

 

In Part 2 of this article, we’ll dig deeper into the complexities of using the right tools to handle your unstructured data – and essentially, how to sell the expense of unstructured data management to the C-Suite.

The post The challenges of unstructured data appeared first on TechHQ.

Bringing data science to the boardroom https://techhq.com/2022/10/business-intelligence-platforms-analyst-software-tools/ Tue, 18 Oct 2022 10:16:50 +0000 https://techhq.com/?p=218796

Despite the frequently-listed advantages of data-led projects as presented by in-house data teams (or data insight system vendors), it’s often very difficult for a CEO to justify the required investment for large data mining and business intelligence projects. Business intelligence platforms are many and varied and may promise immediate results, but what exactly can data...

The post Bringing data science to the boardroom appeared first on TechHQ.


Despite the frequently-listed advantages of data-led projects as presented by in-house data teams (or data insight system vendors), it’s often very difficult for a CEO to justify the required investment for large data mining and business intelligence projects. Business intelligence platforms are many and varied and may promise immediate results, but what exactly can data sources show to the C-Suite (and the CEO particularly) that they don’t already know?

Many CEOs and boards have had their fingers burned in the past, with data scientists promising big but then either under-delivering, taking upward of a year to produce meaningful results, or worst of all, taking many months to produce nothing relevant. Sometimes too, data project leaders claim after the fact that they were “asked the wrong question” by decision-makers at a project’s inception. So why are data projects so slow, costly, and seemingly of so little value?

If we assume that there is a seam of digital gold in the mass of data that most organizations accrue daily, how can we realize better value from it? And more pertinently, how can real value be identified and presented more quickly (think weeks or days, not months or years) without throwing the equivalent of a small country’s GDP at a group of individuals who seem to have little notion of how a business works?

Think about vocabulary & education

Any executive-level decision-maker has had highly specialized training throughout their career. They have been promoted (hopefully) based on their abilities and are rewarded commensurately. They will know, in the case of the CEO, how the organization works and its medium to long-term goals. Specialist C-suite members likewise have shown talent and fortitude to get to where they are, with a good deal of operational experience thrown into the mix in finance, IT, marketing, and so on. Most strategic roles in today’s businesses have existed in one form or another for many decades, if not hundreds of years.

In contrast, data science as a discipline has only existed for a few years. We began to hear about “big data” perhaps ten years ago. The discipline of data processing, and of gathering insight from raw information, was until quite recently an academic exercise practiced at research institutions attached to universities.

Small wonder, then, that there is a mismatch between the language and motivations of the C-suite and a data analyst fresh out of higher education (or, at best, a more experienced IT professional who’s cross-trained in the “new” science of advanced data analytics). It’s a rare data scientist indeed that has proven data analysis and BI “chops,” yet who’s also conversant with how any company can proceed from A to B in terms of its strategy, sales, operations, marketing, and so on. The converse is also true: today’s CEO or CFO might be capable of wrangling complex Excel sheets but unable to build abstract data models.

It’s important, therefore, to cross-educate between business-focused groups and data analytics teams, whether in-house or outsourced. Without common ground, any insight worth drawing from data will be missed. Business Intelligence Platforms (BIPs) purport to help solve the mismatch between business focus and pure data. But there’s also the issue of approach.

Forming continuously collaborative working groups is imperative to this cross-education. Your CIO or CTO can confirm that the best way to produce any new product is by using an Agile methodology, where collaboration between users and software developers (for instance) has to be continuous to keep the project on track. The smart CEO will listen to them.

The same is invariably true when data insights are needed to inform business strategy. Because businesses change so quickly, the agility necessary can only come from dedicated working groups comprising multiple stakeholders working on the same problems as close to full-time as possible. As the team(s) mature, so the time-to-value will decrease, an effect largely down to each stakeholder group becoming more conversant with the requirements of the other(s).

Picking the business intelligence platform

As a relatively new concept, commercial platforms that support business-focused data teams are largely immature. Several platforms from enterprise-scale names such as Oracle and IBM represent many years of development and business/technology crossover. What quickly becomes apparent is that these platforms require extensive alterations to fit the use cases of their users, and such customization won’t come cheap.

At the other end of the scale are the many open-source data or business intelligence platforms that can be used as the building blocks to deliver a similarly bespoke solution. It is a well-accepted maxim that at least 60% of every proprietary piece of software available today comprises open-source elements. But building a data analytics and business intelligence solution from the ground up (albeit with as much open-source functionality as possible) is also massively costly. Software developers neither come cheap nor do they (often) come with the necessary specializations in data science. The software developer with experience in R (commonly used in statistical data analysis), Python, and advanced database administration is, like our data scientist-cum-business-professional, a rare beast.

We regularly feature and review data management, analysis, processing, and presentation platforms that are the market leaders in this emerging sector. Here though, we’re highlighting a business’s priorities when choosing an eventual partner (if third-party partnerships are considered appropriate).

Keep it bite-size

Throughout the Data Analysis and Business Intelligence spotlight series, our reporters have spoken to many thought-leaders in this area of next-generation, data-driven business intelligence. Without exception, all these leaders have advised that any data project start small, with very particular outcomes stipulated from the outset. Given that keeping any data science project on the right track involves a high degree of every stakeholder “finding their feet,” it’s essential that projects be small and discrete, at least to begin with.

The timescale of any project should also be kept short, partly to dial down expectations with a smaller scope, but also to ensure there is no “mission creep” over a year or more. The cadence of quick time-to-value needs to be instigated as soon as possible and developed as a working ethos for any data project group.

Not-so-great expectations

There tends to be a natural mismatch between the mentalities of a pure-bred data science professional and an MBA-touting CEO. The latter is often guilty of impractical ideas of timescales and, therefore, needs to consider where value can be realized in the medium term, rather than in just a few weeks.

Conversely, the academic nature of many data teams’ backgrounds means that professionals will find and appreciate the beauty of their models and the data itself. While intellectual rigor is (almost) always celebrated, the focus on outcomes must be constant and consistent. Data specialists may wish, for example, to refine models and data transformation routines so that there are negligible outliers in data sets. But like any Agile project, the focus should always be on viable products. Business intelligence platforms may be the common ground where short-term projects come to fruition, and longer-term, data-based strategies begin to appear.

All stakeholders have to be willing to collaborate, and collaboration requires compromise. There will be shifts in emphasis as teams evolve, but for short-term, small-scope projects, setting out the style of working together will need to be an important part of the day-to-day work.

The post Bringing data science to the boardroom appeared first on TechHQ.

API trend allows next-gen data platforms to prosper https://techhq.com/2022/10/apis-provide-app-like-future-for-data-platforms/ Fri, 07 Oct 2022 07:03:08 +0000 https://techhq.com/?p=218528

Many of today’s workplace tasks would be impossible without having data and analytics at our fingertips. Information takes us to the answers, and that’s led to a huge rise in the use of data software and business intelligence analytics. But older products can soon become siloed if their features are unable to adapt to modern...

The post API trend allows next-gen data platforms to prosper appeared first on TechHQ.


Many of today’s workplace tasks would be impossible without having data and analytics at our fingertips. Information takes us to the answers, and that’s led to a huge rise in the use of data software and business intelligence analytics. But older products can soon become siloed if their features are unable to adapt to modern marketplaces. As multiple industries converge, the tools need to follow – to keep the data flowing, to let ideas prosper, and to enable good governance from the same trust points.

Design, make, use

Data platforms have become pillars that support sales activity across all departments and keep the wheels of commerce turning. Configurations allow firms to connect cost estimation, purchasing, bidding, and all of the other essential information feeds in the chain. Teams can pull data all the way from making a pitch through to servicing the sale and renewing business. As Jim Quanci – Senior Director of Autodesk Forge Partner Development – explains, unified data models help on the bottom line by giving sales agents the right pricing. And on the top line, having well-structured, unified information translates into winning more deals. Better data builds trust and that allows businesses to land bigger contracts. And it’s why more companies are seeking information platforms in the cloud that allow them to be truly ‘data-driven’.

Cloud-based models help design teams too. Wisdom that would otherwise be wrapped up in separate projects is now easy for workers to access securely and digest – to forecast costings more accurately, help with production time estimates, and collaborate on creative features. Successful platforms make it straightforward to link the people who need the information with the people that have the information – eliminating the need for multiple emails, spreadsheets, and other digital patchwork that brings delays. Being able to share data effectively helps firms to make solid plans and puts them in a stronger position to hit their targets and deliver on strategy. “Forge is the platform for how we do that,” comments Ben Cochran, Senior Director of Engineering, Developer Enablement at Autodesk.

App-like future

Once users get a glimpse of this new, more streamlined, responsive future, they want to run with their own ideas and build on those services. And just as smartphones and tablets have blossomed thanks to app-based ecosystems, central data platforms benefit from a rich set of APIs that give customers analogous building blocks for creating new viewing experiences using their data.

“Participants from various teams built creative projects that we might not have ever built ourselves,” said Shwetha Nagaraja of the Forge product team, who’s been active in organizing workshops where developers can brainstorm, collaborate, and road test ideas. “All of these examples ended up being productized.” Key pieces of the information puzzle can now be wired together, which includes data management, reality capture, information viewing, design automation, modelling, and more. “Forge is a set of web service APIs and allows customers to move ‘design make data’ across an enterprise,” adds Quanci. “They allow customers to do a much better, faster, easier, lower cost job.”
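
For readers who want a feel for what wiring these web service APIs together looks like, here is a hedged Python sketch of a two-legged Forge call: exchange app credentials for a token, then list storage buckets. The endpoint paths and scope reflect the Forge documentation around the time of writing; treat them, and the placeholder credentials, as assumptions to verify against Autodesk's current docs, since the authentication API has since evolved.

    # Hedged sketch of a two-legged Forge (Autodesk Platform Services) call flow.
    import requests

    CLIENT_ID, CLIENT_SECRET = "your-app-id", "your-app-secret"    # placeholders

    # 1. Exchange app credentials for an access token (two-legged OAuth).
    token = requests.post(
        "https://developer.api.autodesk.com/authentication/v1/authenticate",
        data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET,
              "grant_type": "client_credentials", "scope": "bucket:read data:read"},
    ).json()["access_token"]

    # 2. Use the token against the Object Storage Service, e.g. list the app's buckets.
    buckets = requests.get("https://developer.api.autodesk.com/oss/v2/buckets",
                           headers={"Authorization": f"Bearer {token}"})
    print(buckets.json())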

New mindset

“As we move from becoming a product company to a platform company, we’re making our software, our IP available to far more people, and they will do amazing things with it,” said Susanna Holt, VP of Forge. Users will be able to explore this interconnected future without loss of data fidelity, and bypassing the often tedious intervention that crops up when systems won’t talk to each other. A familiar hurdle, the downtime associated with traditional tools tends to occur when customer inspiration conflicts with the original product specification. And users trying to explore new data avenues beyond the original feature set become frustrated. With a regular data product, the business intelligence possibilities are locked in, but API-driven platforms break down those barriers and enable customers to pull in new information feeds. This flexibility allows developers to create visualizations that are much more closely tailored to their requirements and can adapt and evolve as projects grow.

“You used to be able to do what the tool allowed you to do, but now we’re blowing that all out and giving the people the space to be much more creative,” said Holt. The trend in the business intelligence sector backs this bold move. While conventional centralized data platforms have their merits, it’s clear that customers want more. The first wave of digital transformation has given users years of experience in working with data; they can see the possibilities and want to be in a position where it’s possible for them to bend and shape those digital tools to meet their needs.

Browsing the web, you don’t have to search far to find expressions such as ‘data mesh’ and ‘decentralized data systems’, and cloud-based developer services such as Autodesk’s Forge platform play to this forward-looking trend.

The post API trend allows next-gen data platforms to prosper appeared first on TechHQ.

Vectra AI and threat mitigation, part 2 https://techhq.com/2022/10/vectra-ai-and-threat-mitigation-part-2/ Thu, 06 Oct 2022 13:41:07 +0000 https://techhq.com/?p=218562

In Part 1 of this article, we asked Mark Wojtasiak, VP Product Strategy at Vectra AI, how the attack surface for cybercriminals was significantly different in multicloud environments from monocloud or non-cloud environments, and how companies could get out in front of attackers who had many more ways to attack them in these newer...

The post Vectra AI and threat mitigation, part 2 appeared first on TechHQ.


In Part 1 of this article, we asked Mark Wojtasiak, VP Product Strategy at Vectra AI, how the attack surface for cybercriminals was significantly different in multicloud environments from monocloud or non-cloud environments, and how companies could get out in front of attackers who had many more ways to attack them in these newer environments. In Part 2, we asked Mark to explain, in particular, the role that AI could play in aiding this fight against multicloud attackers.

THQ:

You said AI could free human analysts to focus on big attacks in the multicloud environment. How can it do that?

The philosophy of more

MW:

AI can help by automating the mundane manual tasks out of the analyst’s workflow. Right now, analysts are facing regular burnout, and there aren’t enough of them in the world to adequately monitor all the big threats and the small threats that are out there. One of the key things we need to embrace is the need to reframe and simplify the way we look at these problems, because trying to “do more with more” is simply not viable.

Think about it. There are any number of attackers out there in the world, and they’re all fairly smart at what they do, and highly motivated. If we try to match more attackers with more work for analysts, all we’re going to do is burn out more analysts more quickly, which ultimately leaves more systems vulnerable to attack.

THQ:

And that’s how you lose a data war.

MW:

Exactly. So what we need to do is reframe and simplify, and actually reduce the amount of mundane work on the analysts’ shoulders. That’s where AI can come in.

AI can do the mundane monitoring. It can track user behaviors against known threat behavior models, and it can flag behaviors to analysts when, and only when, they look like becoming something dangerous. Then the analysts can step in and do what humans do best – critical thinking – and use all the available information on the behavior of a particular invader or attacker to either shuffle them down electronic alleyways and out of the system, or just boot them out if the danger is imminent. By using AI as an underpinning technology, we can free up analysts to do the important human work, reduce analyst burnout, and create an AI-human partnership that’s hopefully better than the sum of its parts. Of course, that means a shared responsibility between the artificial intelligence side of the equation and the human analysts involved, and the question becomes whether AI can actually do this at scale, and at the speed of an attack.

And what we’ve found is that it can.
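
As a rough illustration of that division of labor, the sketch below scores observed account behaviors against a simple, made-up threat-behavior model and escalates to a human analyst only when the accumulated score crosses a threshold. The behaviors, weights, and cut-off are illustrative assumptions, not Vectra AI’s actual detection logic.

```python
# Illustrative sketch only: flag an account to an analyst when, and only when,
# its accumulated behavior score crosses a threshold. The behavior names,
# weights, and threshold are invented for this example.
from collections import defaultdict

THREAT_WEIGHTS = {
    "impossible_travel_login": 40,
    "privilege_escalation": 35,
    "mass_file_download": 25,
    "new_admin_account": 30,
    "routine_login": 0,
}
ALERT_THRESHOLD = 60  # assumed cut-off for escalating to a human analyst


def triage(events):
    """Return only the accounts whose behavior warrants analyst attention."""
    scores = defaultdict(int)
    evidence = defaultdict(list)
    for event in events:
        # Behaviors outside the model get a small default weight rather than zero.
        weight = THREAT_WEIGHTS.get(event["behavior"], 10)
        scores[event["account"]] += weight
        if weight:
            evidence[event["account"]].append(event["behavior"])
    return [
        {"account": account, "score": score, "evidence": evidence[account]}
        for account, score in scores.items()
        if score >= ALERT_THRESHOLD
    ]


if __name__ == "__main__":
    sample = [
        {"account": "svc-backup", "behavior": "routine_login"},
        {"account": "jdoe", "behavior": "impossible_travel_login"},
        {"account": "jdoe", "behavior": "privilege_escalation"},
    ]
    for alert in triage(sample):
        print(alert)  # only "jdoe" crosses the threshold and reaches an analyst
```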

Hearts and minds

THQ:

Is there a battle for hearts and minds to get that sort of shared responsibility in place?

MW:

Yes, I think so. I’ve been asking security leaders, practitioners, and architects what one word sums up the picture of security – and it’s “More.” Everything’s about more: there’s more attack surface, there are more evasive attackers, there’s more technology, there are more tools. So you’ve got to write more rules, you’ve got to pump more data into a SIEM, you’ve got to do more analytics, you’ve got to do more. It’s always more.

And I think that, fundamentally, this whole approach of doing more is just making the problem worse, making it more difficult, and creating the burnout, etc. So how do you fundamentally deliver more to the organization, but by doing less?

THQ:

How indeed?

MW:

It’s a big challenge that I think a lot of CISOs have, and you see it in questions like “How do we report to the board? How do we talk about cybersecurity to the C-suite and to non-security people?”

And I think everything’s got to be risk-based. Which is similar to everything being outcome-based. What is the ultimate outcome we want here?

THQ:

To fight more attackers, with less human burnout?

MW:

Exactly. And being a defender in this day and age is not easy. It’s extremely difficult and it’s getting worse and worse because of this whole “more” dynamic, so as a technology partner, there are certain things that we have to do. How do you help businesses get more resilient to attack in a hybrid or multicloud world?

It comes back to those two questions – where are they exposed? And where are we compromised right now? If we can answer those questions, we can be resilient to these attacks as they’re evolving. The second thing is how do you help them be more efficient? How do you help them do more with less?

Helping analysts be heroes

And then the third question is: where is the outcome rooted in efficacy? Analysts just want to be effective at their job. They just want to come in and know what they need to focus on. They don’t want to do mundane manual tasks on a day-to-day basis, like triaging alerts or maintaining rules, or whatever it might be; they just want to defend. So how do you begin? How do we help an analyst effectively defend their organization in a hybrid multicloud world? We believe that if you take all of that mundane stuff out of their day-to-day and let the AI handle it, you free them up to do the critical thinking that lets them think like an attacker, and so defeat the attackers that are out there, trying to get in.

That’s how we go about it – we create the AI system that can monitor attack behaviors and deal with the day-to-day triage and alerting, so that the analyst is free to be an analyst and doesn’t get burned out by the constant pressure to “do more.” We let them do less, but more effectively. We let them find their opportunity to add value to the organization as analysts, rather than as glorified watchdogs.

That also means they can evolve their own learning as they go and feed it back into a streamlining process, feeding best practice into their peers – and their processes.

THQ:

That sounds like an uphill budgetary struggle – convincing the board that they need to spend more to let their analysts do less, when the number of multicloud threats and the breadth of the attack landscape are only ever increasing?

MW:

Explaining it to the CISO, or whoever holds the budget, is where the process currently falls down sometimes, yes. The explanation of “Here’s where we have delays, so we need to invest here, whether it’s process, whether it’s people, whether it’s tech, whatever it might be” still sometimes feels like the hard work. But if we’re able to do our job, if we can help the analysts help the business be resilient in a multicloud environment, I believe we can turn the tables on the attackers. I believe we can get ahead of this, and let the analysts be the heroes they can be when they’re not overburdened by the “do more” philosophy. If we don’t, we’ll just see more and more analysts burn out.

And that’s good for no-one but the attackers.

The post Vectra AI and threat mitigation, part 2 appeared first on TechHQ.

The business windfalls of data intelligence integration https://techhq.com/2022/10/the-business-windfalls-of-data-intelligence-integration/ Thu, 06 Oct 2022 13:22:56 +0000 https://techhq.com/?p=218555


It’s the age of information, especially for businesses which can use it to optimize efficiencies and drive forward-looking decision-making. Data integration into various operational capacities keeps a steady stream of insights percolating to push continuous growth, and inform costs and operational oversight.

For IT teams, the volume of data coming through the integration pipeline places greater demands on their time and resources, but the value of data findings in enabling innovation and business resilience within the organization can ultimately benefit the bottom line, as illustrated in a new study, the Economic Impact of Data Innovation 2023, released by Splunk in collaboration with the Enterprise Strategy Group.

The amount of data produced and expected to be consumed (predicted to be a mind-boggling 94 zettabytes by the end of 2022) is causing massive headaches for businesses. The global survey gathered feedback from 2,000 IT, security, and business leaders across the United States, United Kingdom, New Zealand, Australia, Singapore, Japan, France, Germany, and India to better understand how data integration is being operationalized and monetized to reap economic gains.

Is it worth the effort?

Is the data integration journey really worth the investment and operational overhaul it would require of most enterprises? According to the Economic Impact report, the average business saw a 9.5% increase in gross profits, and data-mature leaders launched an average of nine new products annually, compared with an average of just three new products per year at other organizations.

More interestingly, companies with higher levels of data integration were applying it to wider areas of the business, such as the sales, marketing, and customer service funnels, contributing to customer retention rates 19% higher than those of beginner firms just starting to integrate data into their operations. Crucially, when quantifying the economic value that insights bring, data integration leaders are nearly six times more likely to say their organization makes better operational decisions than competitors most of the time.

Companies on the mature end of the data integration curve are nearly three times (2.9x) more likely to beat the competition when releasing a new product to market, and twice as likely to surpass financial benchmarks with the aid of innovative data value. Information analysis also breeds confidence in the workforce of data integration leaders: the IT and business people surveyed are 4.5 times more likely to believe their firm is in a strong position to stand out and flourish in their competitive spaces over the next few years.

Data innovation-driven outfits feel more pressure than their less data-mature rivals, but with that pressure also comes higher degrees of achievement. Two-thirds (67%) of data integration pacesetters report feeling a high degree of pressure, compared to just 41% of intermediate data-mature companies and a paltry 15% of beginners.

The benefit of data-maturity

“I think what was really interesting was just how impactful some of the metrics were around the leaders—or the data-savvy—versus the ones that are just starting out,” commented Ammar Maraqa, Chief Strategy Officer at Splunk. “There’s real financial impact or financial benefit, and it’s both on the top line and the bottom line.”

When it comes to a focused drive to enhance performance and increase profit, the report makes clear that navigating the unique challenges and even predicting the most advisable course of action is significantly improved by analysis-driven insights integrated across the business.

“Data-driven innovation gives you a massive edge,” noted Maraqa. “Organizations that prioritize investments in collecting and using their data have full visibility into their digital systems and business performance, which makes it easier to adapt and respond to disruptions, security threats and changing market conditions.”

The post The business windfalls of data intelligence integration appeared first on TechHQ.
