TY - CONF
T1 - The SPATIAL Architecture
T2 - Design and Development Experiences from Gauging and Monitoring the AI Inference Capabilities of Modern Applications
AU - Ottun, Abdul Rasheed
AU - Marasinge, Rasinthe
AU - Elemosho, Toluwani
AU - Liyanage, Mohan
AU - Ragab, Mohammad
AU - Bagave, Prachi
AU - Westberg, Marcus
AU - Asadi, Mehrdad
AU - Boerger, Michell
AU - Sandeepa, Chamara
AU - Senevirathna, Thulitha
AU - Siniarski, Bartlomiej
AU - Liyanage, Madhusanka
AU - La, Vin Hoa
AU - Nguyen, Manh Dung
AU - Montes de Oca, Edgardo
AU - Oomen, Tessa
AU - Ferreira Goncalves, Joao Fernando
AU - Tanascovic, Illija
AU - Klopanovic, Sasa
AU - Kourtellis, Nicolas
AU - Soriente, Claudio
AU - Pridmore, Jason
AU - Cavalli, Ana Rosa
AU - Draskovic, Drasko
AU - Marchal, Samuel
AU - Wang, Shen
AU - Solans Noguero, David
AU - Tcholtchev, Nikolay
AU - Ding, Aaron Yi
AU - Flores, Huber
PY - 2024/7
Y1 - 2024/7
N2 - Despite its enormous economic and societal impact, a lack of human-perceived control and safety is redefining the design and development of emerging AI-based technologies. New regulatory requirements mandate increased human control and oversight of AI, transforming the development practices and responsibilities of individuals interacting with AI. In this paper, we present the SPATIAL architecture, a system that augments modern applications with capabilities to gauge and monitor the trustworthiness of their AI inference capabilities. To design SPATIAL, we first explore the evolution of modern system architectures and how AI components and pipelines are integrated. With this information, we then develop a proof-of-concept architecture that analyzes AI models in a human-in-the-loop manner. SPATIAL provides an AI dashboard that allows individuals interacting with applications to obtain quantifiable insights about the AI decision process. This information is then used by human operators to comprehend possible issues that influence the performance of AI models and to adjust or counter them. Through rigorous benchmarks and experiments in real-world industrial applications, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining systems implementing AI. Our work highlights lessons learned and experiences from augmenting modern applications with mechanisms that support regulatory compliance of AI. In addition, we present a roadmap of ongoing challenges that require attention to achieve robust trustworthiness analysis of AI and greater engagement of human oversight.
M3 - Paper
SP - 1
EP - 13
ER -