30.07.2024

Expanding the capabilities of generative AI - Is generative AI all we need?

Generative AI, with its ability to create content that mimics human-like understanding, has transformed industries ranging from creative writing to software development. However, even current state-of-the-art methods can generate inaccurate or nonsensical information, commonly referred to as “hallucinations”. This undermines the reliability of the generated content, which must be manually verified to avoid errors. Improving the accuracy of generative AI is therefore crucial. By incorporating robust verification methods, we can aim to minimize these errors. The Modeling Assistant is a step in this direction, aiming to equip generative AI with these missing capabilities.


The idea

The Modeling Assistant leverages Large Language Models (LLMs), whose capabilities have grown rapidly in recent years. Although extremely versatile and effective, these models lack specific capabilities: for example, they cannot reliably perform arithmetic, cannot validate their own output, and lack knowledge of specialized software that might be of interest. The Modeling Assistant fills this gap by managing extra tools that can identify and correct inconsistencies in the produced data and “empower” the LLM to perform more complex tasks. This improves the quality and trustworthiness of the generated content and effectively expands the capabilities of the LLM.


The execution

One example of such an improvement is data validation. LLMs can extract data from text with high accuracy, but they have no native way of checking the coherence of that data. Extracted data often must satisfy specific rules to be valid, and since these rules are not intrinsically encoded in the LLM, such tasks can produce “hallucinations”. Consider extracting financial data from reports and checking its validity: some numbers might not add up to the correct totals. With the Modeling Assistant, the information extracted by the LLM is handed over to our backend and further processed before feedback is sent back to the LLM, which then has additional information about the validation process and can fix any errors.
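To make this concrete, a consistency check on extracted financial data could look like the following minimal sketch. All function and field names here are our own illustrations, not the actual Modeling Assistant interface; the point is only the shape of the feedback loop, where a failed check is turned into a message the LLM can act on.

```python
def validate_totals(line_items, reported_total, tolerance=0.01):
    """Check whether extracted line items add up to the reported total.

    Returns a feedback message suitable for sending back to the LLM,
    or None if the data is consistent within the given tolerance."""
    computed = sum(line_items)
    if abs(computed - reported_total) <= tolerance:
        return None
    return (
        f"Validation failed: line items sum to {computed:.2f}, "
        f"but the reported total is {reported_total:.2f}. "
        "Please re-extract or correct the inconsistent values."
    )

# Data as the LLM might have extracted it from a report
# (the line items sum to 220.00, not the reported 230.00)
extraction = {"line_items": [120.50, 80.25, 19.25], "total": 230.00}

feedback = validate_totals(extraction["line_items"], extraction["total"])
```

Here `feedback` is a human-readable error description; in the loop described above, it would be appended to the LLM conversation so the model can correct its extraction in the next turn.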


How can you profit from it?

Although we currently use the Modeling Assistant to aid the users of our own code, the architecture built around the LLMs is completely modular and allows the Modeling Assistant to be “shaped” into any form we like. Given the extra functionalities required, the Modeling Assistant's tool belt is created and handled in a consistent way. This means we are not restricted to a specific use case and can tackle problems of various kinds: from aiding users of a specific scientific software, to extracting coherent data from large documents, to structuring unorganized data into specific formats.
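The modular "tool belt" idea can be sketched as follows. This is an assumption-laden illustration, not the Modeling Assistant's actual architecture: each capability is registered under a uniform interface, so new tools can be plugged in without changing the dispatch logic.

```python
from typing import Callable, Dict


class ToolBelt:
    """Illustrative registry of tools exposed to an LLM under one interface."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, tool: Callable[[str], str]) -> None:
        """Add a tool under a unique name."""
        self._tools[name] = tool

    def run(self, name: str, payload: str) -> str:
        """Dispatch a request (e.g. from the LLM) to the named tool."""
        if name not in self._tools:
            return f"Unknown tool: {name}"
        return self._tools[name](payload)


belt = ToolBelt()
# Two toy tools: arithmetic evaluation and text normalization.
belt.register("evaluate_math", lambda expr: str(eval(expr, {"__builtins__": {}})))
belt.register("to_upper", lambda text: text.upper())

result = belt.run("evaluate_math", "2 + 3 * 4")  # "14"
```

Because every tool shares the same call signature, swapping in a domain-specific capability (say, a checker for a particular scientific software) is a single `register` call.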


Conclusion

A patent application has been filed for this process. This innovative and powerful interplay between the Modeling Assistant and the LLM further improves the capabilities of generative AI, opening up numerous possibilities for its applications.


Author: Dr. Fabio Covito, Lead Developer HQS Modeling Assistant, Senior Expert Condensed Matter Systems at HQS Quantum Simulations

Picture: HQS Quantum Simulations