Research has increasingly examined artificially intelligent programs capable of developing advanced systems that governments and organizations now use to analyze highly complex structures across sectors, in ways not possible with conventional information technology. While some AI is subject to rigorous testing and ethical review, other applications raise questions about what governance structures exist to control risks to humanity and long-term harmful economic and social consequences. This paper raises awareness of the unprecedented challenge governments and private industry face in managing these complex systems, in which regulators, markets, and special interests all influence the development of AI in different contexts without a full appreciation of its impact on human rights and other consequences. The research focuses on three primary areas: (1) how AI technologies have evolved; (2) the major ethical and human rights issues arising from the use of AI in public and business environments; and (3) how frameworks and governance structures for AI regulation can be improved. Drawing on empirical evidence, the paper explores the legal implications, including the rights and duties of government and private industry to protect against unlawful intrusions into people's lives, and advances recommendations for accountability frameworks and regulations essential to ensuring safety and security as artificially intelligent systems advance.
Title of host publication: Proceedings of the 15th International Conference on Cyber Warfare and Security ICCWS 2020
Editors: B.K. Payne, H. Wu
Place of publication: Reading, UK
Number of pages: 8
Publication status: Published - 12 Mar 2020