
How Apta Unlocks AI's True Potential

AI has advanced remarkably quickly over the last few years, driven by the advent of Large Language Models (LLMs) and their widespread consumer adoption. Models like Mistral, Llama 3, GPT-3.5 and Gemini have revolutionized our interaction with technology, enabling sophisticated conversational agents with task-executing capabilities that are widely accessible to the public. Nevertheless, LLMs alone have fundamental limitations that hinder their ability to deliver precise and reliable information in many domains. First and most critically, this comes down to how LLMs work: they are language models that cannot reason or execute logic; instead, they autoregressively predict the next token (a sub-word), one at a time (see the example below).



LLMs generate an answer one token at a time. Each token is selected from a probability distribution over the vocabulary that is designed to mirror the flow of human language in the training data, i.e. internet content, rather than any logical reasoning framework. In the example above from GPT-4o, even though 3307 is a prime number, the word ‘no’ has a non-zero probability of being selected, because the model does not reason. Once that first word is generated there is no way back, and the model will continue to generate a nonsensical explanation token by token. Ideally, the decision as to whether a number is prime should follow a logical reasoning step, such as prime factorization or a lookup in a database of prime numbers, rather than narrowing down the kinds of words humans use after similar questions and randomly selecting one.
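To make the contrast concrete, here is a minimal Python sketch (our own illustration, not part of any LLM) of the kind of deterministic check an external tool could perform. Trial division always returns the correct answer, with no probability distribution involved:

```python
import math

def is_prime(n: int) -> bool:
    """Deterministic primality check by trial division.

    Unlike an LLM's token-by-token guess, this always returns the
    correct answer for the integer it is given.
    """
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Only odd divisors up to sqrt(n) need to be checked.
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print(is_prime(3307))  # True: 3307 is prime, decided by logic, not sampling
```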


An emerging approach to overcoming the traditional shortcomings of LLMs is agentic flow, now used by a subset of LLMs such as GPT-4o in ChatGPT. In agentic flow, an LLM triggers external tools (agents) to answer questions that require logical reasoning or highly precise information. Examples include agents capable of simple mathematics, web search and other general-purpose tasks. However, because generalist LLMs must serve very broad use cases, the agentic functions they interface with can only cover simple tasks with broad utility across many contexts, e.g. simple mathematics. Problems arise when we want agentic flow to enable more complex, highly domain-specific functions, e.g. determining where a stock price lies relative to its 200-day moving average. Achieving this requires an agentic function capable of executing the domain-specific task, and access to highly structured data to drive that function, e.g. a database of stock price time series. Even if both requirements were fulfilled, agentic flow currently operates reactively, i.e. on the fly, so the latency of producing an output would be too long for complex agentic functions. Today, the majority of LLMs are stuck at the first hurdle: they lack domain-specific agentic functions. As a result, when no appropriate agent is available to answer a more complex query, they fall back to generating probabilistic tokens one by one instead of relying on computation or logic.
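To illustrate what a domain-specific agentic call could look like, here is a rough Python sketch. Everything in it is hypothetical: the function name, the toy tool registry and the synthetic price series are our own stand-ins, not a real API. The idea is that the LLM emits a structured tool call and a runtime dispatches it to deterministic code backed by structured data:

```python
from statistics import mean

# Hypothetical domain-specific agentic function: compare the latest close
# to the 200-day simple moving average.
def price_vs_200d_ma(prices: list[float]) -> dict:
    if len(prices) < 200:
        raise ValueError("need at least 200 daily closes")
    ma_200 = mean(prices[-200:])
    latest = prices[-1]
    return {
        "latest": latest,
        "ma_200": round(ma_200, 2),
        "pct_above_ma": round((latest / ma_200 - 1) * 100, 2),
    }

# A minimal 'tool registry': the LLM would emit a tool name plus arguments,
# and the runtime would dispatch the call here, returning an exact numerical
# result for the model to phrase as an answer.
TOOLS = {"price_vs_200d_ma": price_vs_200d_ma}

def dispatch(tool_name: str, **kwargs):
    return TOOLS[tool_name](**kwargs)

closes = [100 + 0.1 * i for i in range(250)]        # toy price series
print(dispatch("price_vs_200d_ma", prices=closes))  # deterministic output
```

The precise answer comes from computation over the time series, not from next-token probabilities; the LLM's only job is to decide which tool to call and to phrase the result.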


Generalist LLMs cannot execute complex, domain-specific agentic functions:

1. they lack specialized agentic functions capable of executing domain-specific tasks

2. they lack access to domain-specific data to drive these specialized agents

3. they are reactive, leading to high latency when scaled to these tasks


Apta addresses these drawbacks by developing domain-specific agentic functions capable of executing the complex operations that are frequently used within particular domains. We pair these with the highly structured data architectures required to drive them. To address the latency of complex agentic function calls, Apta is pioneering the development of pre-emptive agents, which can execute complex analytic functions within targeted domains with very low latency.
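As a rough illustration only (Apta's actual pre-emptive framework is not described here, and every name below is hypothetical), one way to picture the reactive versus pre-emptive distinction is an agent that refreshes its domain-specific results ahead of user queries, so a question is answered from an already-computed value rather than by triggering the computation on the fly:

```python
import time

# Hypothetical sketch: a 'pre-emptive' agent keeps frequently needed
# domain-specific results precomputed, so answering a query becomes a
# low-latency lookup rather than an on-the-fly computation.
class PreemptiveMovingAverageAgent:
    def __init__(self, price_store: dict[str, list[float]]):
        self.price_store = price_store  # stand-in for a structured time-series database
        self.cache = {}

    def refresh(self):
        """Run ahead of user queries, e.g. after each market close."""
        for ticker, closes in self.price_store.items():
            if len(closes) >= 200:
                ma_200 = sum(closes[-200:]) / 200
                self.cache[ticker] = {
                    "latest": closes[-1],
                    "ma_200": round(ma_200, 2),
                    "computed_at": time.time(),
                }

    def answer(self, ticker: str):
        """Called when the LLM routes a query here; fast because the
        heavy lifting already happened in refresh()."""
        return self.cache.get(ticker)

# Toy usage with synthetic data
store = {"ACME": [50 + 0.05 * i for i in range(300)]}
agent = PreemptiveMovingAverageAgent(store)
agent.refresh()              # pre-emptive step, done before anyone asks
print(agent.answer("ACME"))  # instant lookup at query time
```

Pre-emptive computation of this kind only pays off where queries are predictable within a domain, which is exactly the vertical-by-vertical setting described below.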


Apta’s business model differs markedly from that of the generalist large language model companies. We develop co-pilots tailored to particular domains using our pre-emptive agentic flow framework, building vertical by vertical rather than developing a generalist all-in-one system. Apta’s AI systems are highly accurate and bespoke to each application, so they are capable of more widespread task automation and enable complex business analyses to be orchestrated directly by decision-making end-users, reducing the number of people in the analytics-to-action loop. This drives significant cost reduction and unlocks deeper insights from data, leading to better business decisions.


Stay up to date with Apta's news to get early access to our system when it launches!
