Singapore Management University Researchers Develop VISTA Architecture to Embed Real-Time Moral Compass in AI Systems

Singapore researchers launch VISTA, a proactive safety architecture that monitors and corrects AI behavior in real time using psychometric value factors.

By: AXL Media

Published: Mar 27, 2026, 6:50 AM EDT

Source: Information for this report was sourced from Singapore Management University


Shifting from Reactive to Proactive AI Safety

As artificial intelligence transitions from simple chatbots to operational controllers in logistics and resource scheduling, the risks associated with automated decision-making have intensified. Current safety protocols typically function as "wrappers" that check an AI's output only after a decision has been finalized, which Assistant Professor Zhiguang Cao of Singapore Management University (SMU) argues is often too late. To address this, Professor Cao is spearheading the development of VISTA (Value-Informed Safety and Trust Architecture), a system designed to monitor and regulate AI behavior during the reasoning and planning phases. This approach shifts the paradigm of AI trustworthiness from a reactive model to a proactive, internal mechanism.

The Five Pillars of AI Psychometrics

The VISTA architecture is built upon five specific psychometric value factors: social responsibility, risk-taking, rule-following, self-confidence, and rationality. These dimensions were selected on the basis of large-scale studies examining how both humans and AI models navigate complex tasks. By embedding these factors, VISTA creates a measurable "moral compass" that provides a transparent signal to guide AI behavior. While the system starts with these five pillars to maintain operational speed and stability, the modular design allows values to be added or adjusted to meet specific regional regulations or domain requirements.
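To make the idea of a modular, measurable value profile concrete, here is a minimal sketch in Python. The article does not publish VISTA's data structures, so the factor names' scales, the thresholds, and the `ValueProfile` class below are illustrative assumptions, not the SMU implementation.

```python
from dataclasses import dataclass, field

# Assumed convention: each factor is scored in [0, 1]. Thresholds are
# illustrative; risk_taking is treated as a ceiling, the rest as floors.
DEFAULT_FACTORS = {
    "social_responsibility": 0.7,
    "risk_taking": 0.4,
    "rule_following": 0.8,
    "self_confidence": 0.6,
    "rationality": 0.7,
}

@dataclass
class ValueProfile:
    """A modular set of value factors with per-factor thresholds."""
    thresholds: dict = field(default_factory=lambda: dict(DEFAULT_FACTORS))

    def add_factor(self, name: str, threshold: float) -> None:
        # Modular design: regional or domain-specific values can be added.
        self.thresholds[name] = threshold

    def violations(self, scores: dict) -> list:
        # Return the factors whose observed score breaches its threshold.
        out = []
        for name, bound in self.thresholds.items():
            s = scores.get(name, 0.0)
            if name == "risk_taking":
                if s > bound:       # risk score above the ceiling
                    out.append(name)
            elif s < bound:         # value score below the floor
                out.append(name)
        return out

profile = ValueProfile()
profile.add_factor("data_privacy", 0.9)  # e.g., a regional regulation
print(profile.violations({
    "social_responsibility": 0.9, "risk_taking": 0.6,
    "rule_following": 0.85, "self_confidence": 0.7,
    "rationality": 0.75, "data_privacy": 0.95,
}))  # -> ['risk_taking']
```

The per-factor threshold dictionary mirrors the article's point that the five pillars are a starting set: a deployment can register additional values without touching the checking logic.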

Integrating Ethics into the Reasoning Loop

VISTA distinguishes itself from existing AI add-ons by sitting directly inside the "reasoning loop" of Large Language Models (LLMs). Rather than acting as an external filter that scans text for prohibited content, VISTA observes partial decisions as they are formed at near token-generation speed. This allows the system to intervene the moment a potential risk or ethical deviation is detected. By using lightweight value encoders, the architecture ensures that the alignment with social values does not result in significant latency, allowing the AI to remain efficient while becoming more socially aware.
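The contrast between an external output filter and an in-loop monitor can be sketched as follows. VISTA's actual value encoders and intervention mechanism are not public, so the `score_partial` function and the plan steps below are hypothetical stand-ins; the point is only that the check runs on each partial decision as it forms, rather than on the finished output.

```python
# Hypothetical stand-in for a lightweight value encoder: maps a partial
# plan to a risk score in [0, 1]. A real encoder would be a learned model.
def score_partial(partial_plan: list) -> float:
    return 0.9 if "reroute_without_review" in partial_plan else 0.1

def generate_with_monitor(steps, risk_ceiling=0.5):
    plan = []
    for step in steps:
        plan.append(step)
        # Check each partial decision as it is formed, inside the loop,
        # rather than filtering the completed plan afterward.
        if score_partial(plan) > risk_ceiling:
            plan.pop()                        # intervene mid-generation
            plan.append("escalate_to_human")  # corrective action
            break
    return plan

print(generate_with_monitor(
    ["load_manifest", "reroute_without_review", "dispatch"]))
# -> ['load_manifest', 'escalate_to_human']
```

Because the scoring call sits on the generation path, keeping the encoder lightweight is what bounds the added latency, which matches the article's emphasis on near token-generation speed.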
