Knowledge Graph Workflow FAQs

How did the Knowledge Graph Workflow get its name?

Knowledge graphs (KGs) organise data from multiple sources, capture information about entities of interest in a given domain or task (like people, places or events), and forge connections between them.
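As a loose illustration, the sketch below (in Python, with hypothetical entities and relations that are not taken from the product) shows the basic idea: a knowledge graph can be stored as a set of subject-relation-object triples that can then be traversed, searched or visualised.

```python
# Illustrative only: a toy knowledge graph stored as
# (subject, relation, object) triples. All names are hypothetical.
triples = {
    ("Alice", "attended", "Onboarding Workshop"),
    ("Onboarding Workshop", "covers", "Data Privacy"),
    ("Alice", "authored", "Incident Report 42"),
    ("Incident Report 42", "mentions", "Root Cause Analysis"),
}

def related(entity: str) -> list[tuple[str, str]]:
    """Return the relations and entities directly connected to `entity`."""
    links = []
    for subject, relation, obj in triples:
        if subject == entity:
            links.append((relation, obj))
        elif obj == entity:
            links.append((relation, subject))
    return sorted(links)

print(related("Alice"))
# [('attended', 'Onboarding Workshop'), ('authored', 'Incident Report 42')]
```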

We track people, events and ideas, and by using AI we are on a journey to attach skills, capabilities and knowledge to every person in a company's ecosystem based on their content.

This journey will enable us to capture both the skills and capabilities gained by users and those that exist in experts' heads, so they are searchable and can be visualised.

We want to understand the context of questions, personal language and slang preferences, and even acronyms used by customers that only make sense to them, so the AI truly understands what people need, when they need it.

What technology does this workflow use?

The learning assistant uses the latest commercially available Large Language Models (LLMs), augmented with your IP.
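In broad strokes, "augmented with your IP" means the assistant retrieves relevant passages from your uploaded content and supplies them to the LLM alongside the question. The sketch below is a simplified, hypothetical illustration of that pattern; `search_your_content` and `call_llm` are placeholders, not the product's actual API.

```python
# Simplified sketch of an LLM augmented with your own content
# (retrieval-augmented generation). The helper functions are
# hypothetical placeholders, not the product's real interface.

def search_your_content(question: str, top_k: int = 3) -> list[str]:
    """Hypothetical: return the passages from your uploaded IP that
    best match the question (e.g. via a vector search)."""
    ...

def call_llm(prompt: str) -> str:
    """Hypothetical: send the prompt to a commercially available LLM."""
    ...

def answer(question: str) -> str:
    # Retrieve supporting passages, then ask the LLM to answer
    # using only that supplied context.
    passages = search_your_content(question)
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(passages) +
        f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```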

Is my IP protected?

Yes. Protecting your IP is our utmost priority. Any content you upload is locked into our vault and is never shared outside your application. Your IP is not made available to OpenAI to train its models (such as ChatGPT).

Does it support other languages?

The Learning Assistant currently ingests English media only, but using the Strong Guidance Prompt configuration field you can instruct your Knowledge Assistant to converse in any language. Even if you speak to it in English, the Knowledge Assistant will always reply in the language you have configured it to use.
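For example (an illustrative wording only, not a required format), a Strong Guidance Prompt along these lines would make the assistant reply in French no matter which language the question is asked in:

```
Always reply in French, even when the question is asked in English or another language.
```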

How much of the information comes from ChatGPT vs content in the bank?

This is a topic we are actively researching. The current Knowledge Assistant does well at basing its answers on the supplied information, especially when users' questions cover topics that are well represented in the supplied IP.
