DYNIZER: The Complete Framework

The Rise of Narrative Data

Whatever benefit you feel you are getting from your data, it's probably just the tip of the iceberg.

The iceberg is a good analogy, because what you can see and use is probably only a fraction of what's hidden below the waterline. Think 20% above the waterline versus 80% below.

What's visible is often structured data: the kind of row-and-column information you find in spreadsheets and relational databases.

Most data is what's generally described as 'unstructured'. What that really means is narrative information: the information that tells us the story in the data, over and above the bare facts and entities contained in structured relational data.

The narrative is the rich seam of data gold that holds the structure together, brings context and tone of voice, and makes automatic connections between the quantitative elements and the qualitative actions that turn a million-dollar asset into a billion-dollar bonus.

So, how do you turn the iceberg upside down?

Do the data right to gain added value

What comes first – the algorithm or the data?

That’s the challenge facing many who seek to use AI and NLP to provide efficient answers to complex information management issues. Operationally, there can only be one answer, because what use is the algorithm if you don’t respect the data? And the best way to do that is to do the data right in the first place.

Too often, data management solutions find themselves trapped in an unproductive cycle in which new models are created for every use case. This is time-consuming and expensive. It also increases the risk of falling foul of compliance regulations, because it reduces the ability to maintain ethics and mitigate bias.

The solution is to build one consistent base infrastructure so there is no need to keep reinventing the model and rebuilding the structure. Making the basics reusable is more efficient and more effective, and it brings greater benefit by exploiting the additional elements revealed by AI and NLP pipelines in the Dynizer. It reduces the time to information and cuts the cost of data processing.

Doing the data right simplifies the objective for users and specialists alike, because if business users understand the data better, they can tell AI specialists what they want out of it. Then it becomes easier for the specialists to give them what they want.

Make use of the ‘missing’ 80%

Narrative data makes up nearly four times as much of your data as the structured data stored in rows and columns.

And yet, most companies will spend only 20% of their data resources trying to find the meaning in the message because "it's too hard, too complicated, too costly, too time-consuming". The other 80% of the effort goes on the more easily extracted data, which is like settling for crumbs when you could have nuggets.

Structure the narrative

It doesn't make sense to analyze documents as if they were just a collection of entities. You need to be able to structure the complete narrative because that’s where the added value of the document lives.

Semantics is the key that unlocks the door between narrative data and facts and figures. How? Essential to understanding all data are the four core elements of semantics: people, things, places, and moments in time. In other words: Who, What, Where, When.

In narrative data, these combine into sentences and paragraphs, with the addition of How and Why, which provide context. If you can extract the Who, What, Where, and When, you find the facts and figures in your narrative and can use them together. This is what the Dynizer does.
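To make the idea concrete, here is a minimal sketch in Python, using the open-source spaCy library as a stand-in NLP component. The label-to-slot mapping and the example sentence are assumptions made for illustration; this is not the Dynizer's actual pipeline.

    # Minimal sketch: group named entities into the four semantic slots.
    # spaCy stands in for a generic NLP component; the label-to-slot map
    # below is an illustrative assumption, not the Dynizer's own logic.
    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

    SLOT_FOR_LABEL = {
        "PERSON": "Who", "ORG": "Who",
        "PRODUCT": "What", "MONEY": "What", "EVENT": "What",
        "GPE": "Where", "LOC": "Where", "FAC": "Where",
        "DATE": "When", "TIME": "When",
    }

    def extract_wwww(text):
        """Group the named entities in a passage by Who/What/Where/When."""
        slots = {"Who": [], "What": [], "Where": [], "When": []}
        for ent in nlp(text).ents:
            slot = SLOT_FOR_LABEL.get(ent.label_)
            if slot:
                slots[slot].append(ent.text)
        return slots

    print(extract_wwww("Maria Gonzalez signed the lease in Brussels on 3 May 2021."))
    # roughly: {'Who': ['Maria Gonzalez'], 'What': [],
    #           'Where': ['Brussels'], 'When': ['3 May 2021']}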

Self-service data: Help yourself to insight

How often do the people who want to make use of data really have easy access to the tools that make that happen?

The chances are, not that often. More likely there are layers of management between the data and the end user which, if they’re not exactly barriers, still tend to slow down the process.

The Dynizer integrates simply with document-import technologies via bespoke pipelines, connects to APIs for straightforward data querying and dashboard display, and includes its own integrated Explorer. Together, these enable the end user to set up the data environment and gain fast insight into the results.

There is no need to continually reinvent the process to run data projects successfully. All the part-of-speech tagging, textual analysis, entity extraction, phrase-dependency parsing, and so on that is squeezed out of unstructured narrative data is stored for use in other projects.
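The sketch below illustrates that analyze-once, reuse-later pattern, again with spaCy purely as an example; its DocBin serialization stands in for whatever persistent store a platform like the Dynizer actually uses.

    # Sketch of annotation reuse: run the NLP pipeline once, persist the
    # annotated documents, and reload them later without re-analyzing.
    # spaCy's DocBin is used only as an example of the pattern.
    import spacy
    from spacy.tokens import DocBin

    nlp = spacy.load("en_core_web_sm")

    # Project A: analyze once and persist the annotations.
    docs = list(nlp.pipe(["The board met in Paris.", "Revenue rose in Q3."]))
    DocBin(docs=docs, store_user_data=True).to_disk("annotations.spacy")

    # Project B: reload the stored annotations instead of re-running.
    reloaded = DocBin().from_disk("annotations.spacy").get_docs(nlp.vocab)
    for doc in reloaded:
        print([(ent.text, ent.label_) for ent in doc.ents])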

It means users can help themselves to the information they need and leave the data scientists and data engineers to get on with the tricky technicals.

The technology shaping data’s evolution

The problems of dealing with 'big' data are clear:

  • There's too much of it now, and there will be much more tomorrow.

  • It's coming at you too fast.

  • It's coming at you from everywhere.

  • You can't control it.

  • Worse, you can't make use of it.

Relational databases on their own can't handle the volume and variety of data we're creating every day, and in truth they were never designed to in the first place.

Numerous technologies have been created to fill the gap, but every new Lake, Warehouse, Hub, Lakehouse, or Fabric only adds to the confusion.

Consono’s vision of reimagining the data landscape is now embodied in the Dynizer. No one’s calling it a revolution; it’s an evolution. It’s not a total reorganization; it’s a progression towards a simpler, more integrated data landscape in which business users can serve themselves data in the way that suits them best.

As the next step in the evolution of data technologies, the Dynizer gives data back to the people who create it and to the ones who need to use it most efficiently. As a society we are creating more data every day, while at the same time there are fewer people with the expertise to manage it properly.

The Dynizer’s analytics make extensive use of artificial intelligence and natural language processing, based on a combination of industry-standard components and Consono’s bespoke analytics methodology.

This helps free the technicians, giving them the time and space to do their jobs.

The technology behind the Dynizer has been recognized for its uniqueness and novelty and is protected by patents EP 3 249 557 B1 in Europe and US 2017/0344634A1 in the United States of America.

The Dynizer Action Model

Much as unstructured narrative data combines into sentences that contain individual elements and the context that holds them together, so the Dynizer’s data model is based on entities and the Actions that link them.

The model identifies and formalizes not only the basic elements of a document, but also the elements that add value. It then makes all these available in a structured format so they can be used to improve quantitative analyses by automatically linking these qualitative narrative elements to structured data.
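As a loose illustration, an Action can be pictured as a record that names the linking event and attaches typed Who/What/Where/When entities together with their narrative context. The class and field names below are assumptions made for this sketch, not the Dynizer's actual schema.

    # Illustrative data structure only; names and fields are assumptions,
    # not the Dynizer's internal model.
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        value: str   # e.g. "Maria Gonzalez", "Brussels"
        kind: str    # one of "Who", "What", "Where", "When"

    @dataclass
    class Action:
        name: str                        # the linking event, e.g. "signed"
        entities: list = field(default_factory=list)
        context: str = ""                # the How/Why that binds the elements

    lease_signing = Action(
        name="signed",
        entities=[
            Entity("Maria Gonzalez", "Who"),
            Entity("the lease", "What"),
            Entity("Brussels", "Where"),
            Entity("3 May 2021", "When"),
        ],
        context="to secure the new office before year-end",
    )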

The model is represented in the Dynizer Base User Interface which manages the integration and analysis of unstructured narrative data by connecting the Who, What, Where, and When elements.

It enables users to interrogate their data thoroughly, both in the Explorer function and through queries based on Consono’s SQL-compliant DQL query interface.
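For flavor, a query over such a model might look something like the SQL-style sketch below. The syntax and table name are invented for illustration only; the real DQL grammar is defined by Consono and may differ.

    # Hypothetical, SQL-style query for illustration only; the actual
    # DQL grammar is Consono's and may differ from this sketch.
    query = """
    SELECT who, what, "where", "when"
    FROM actions
    WHERE what LIKE '%lease%'
      AND "when" >= '2021-01-01'
    """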

The Dynizer Base UI makes it simpler for users to manage metadata and to monitor and control data quality. Below are the features and functions of the Dynizer:

Generic Features for Narrative Data
  • Automated context-based link detection

  • Automated transformation of text

  • Specific formal combinations (Actions)

  • Identification of WWWW entities (Who, What, Where, When)

  • Multilingual

  • Semantic abstraction

Specific Features for Narrative Data
  • Role detection

  • Language and sentence detection

  • Document type identification

  • Tone of voice detection

  • Specific summarization

Features for Structured Data
  • Automated link and correlation detection

  • Easy mapping of different models

  • Integration with other databases

  • Multiple datatypes in the same column

  • No need to predefine links

Features for Data Integration
  • Automated data model creation

  • Connectors for mainstream enterprise data

  • Data lineage information

  • Quick and easy-to-use data integration

  • Reusable mapping for different data

Features for Querying
  • Data-less query resolution

  • Data-structure-agnostic querying possible

  • Joins over repeating columns supported

  • Specific semantic functions for advanced and analytical queries

Features for Data Storage
  • Automated full deduplication of data

  • Small footprint

  • Full separation between model and data

Features for Metadata
  • Fully auditable database (who accessed what information, from where, at what point in time)

  • Integrated CACTUS model for Data Quality Management

  • Integrated Data Dictionary functionality

  • Integrated Data Lineage functionality

  • Integrated metadata that can be attached in the form of a key-value structure to every individual record or even to individual cells (sketched below)
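To illustrate that last point, per-record or per-cell metadata can be pictured as a key-value map carried alongside the value itself. The structure below is an assumption made for illustration, not the Dynizer's storage format.

    # Illustration only: a cell carrying its own key-value metadata.
    # Field names are assumptions, not the Dynizer's storage format.
    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        value: object                                 # the data itself
        metadata: dict = field(default_factory=dict)  # per-cell key-value metadata

    amount = Cell(
        value=1_000_000,
        metadata={"source": "contract.pdf", "ingested": "2021-05-03"},
    )
    print(amount.metadata["source"])  # -> contract.pdf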
