
Human intelligence and human consciousness are the points of reference used to measure the capabilities of artificial intelligence (AI). AI is commonly divided into two types: “weak” AI and “strong” AI. Computers currently display human-like intelligence by performing mathematical calculations, processing machine-learning algorithms, and carrying on chatbot conversations via natural language processing. However, even the most sophisticated calculations, algorithms, and chatbot conversations merely give the impression of human intelligence; they are examples of “weak” AI. “Strong” AI, by contrast, is not yet a reality: it would possess human-like cognitive abilities and might even develop consciousness.
A recent document from the Vatican, Antiqua et Nova (Ancient and New), is subtitled “Note on the Relationship Between Artificial Intelligence and Human Intelligence.” The document addresses AI and technology ethics in general and treats ten specific areas of inquiry in particular. Its philosophical and theological insights make it an excellent read for anyone, and the following ten areas of inquiry should be of interest and concern to everyone:
- AI and Society
- AI and Human Relationships
- AI, the Economy, and Labor
- AI and Healthcare
- AI and Education
- AI, Misinformation, Deepfakes, and Abuse
- AI, Privacy, and Surveillance
- AI and the Protection of Our Common Home
- AI and Warfare
- AI and Our Relationship with God

If an AI can sense, respond to, and affect its digital and/or physical environment, then it is considered an intelligent agent or an intelligent robot. For example, a digital thermostat senses changes in temperature and responds by effecting changes in its environment through a connected heating, ventilation, and air-conditioning (HVAC) system. Likewise, robotic vacuum cleaners, lawn mowers, and especially autonomous vehicles (land, sea, air, and space) are specifically designed to interact with their respective environments through sensors (for perception), motors (for movement), and actuators (for effecting change).
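The thermostat example above can be sketched as a minimal sense-decide-act loop. The class and method names below are hypothetical, chosen only to illustrate the pattern:

```python
# Minimal sketch of an intelligent agent, modeled on the digital-thermostat
# example: sense the environment, decide, then act through an HVAC system.
# All names here are illustrative, not drawn from any real library.

class ThermostatAgent:
    """Toy agent that maps a sensed temperature to an HVAC action."""

    def __init__(self, target_temp=21.0, tolerance=0.5):
        self.target_temp = target_temp  # desired room temperature (Celsius)
        self.tolerance = tolerance      # dead band to avoid rapid switching

    def decide(self, sensed_temp):
        # Perception -> decision: pick the action that moves the
        # environment back toward the target temperature.
        if sensed_temp < self.target_temp - self.tolerance:
            return "heat"
        if sensed_temp > self.target_temp + self.tolerance:
            return "cool"
        return "idle"

agent = ThermostatAgent(target_temp=21.0)
print(agent.decide(18.0))  # heat
print(agent.decide(24.0))  # cool
print(agent.decide(21.2))  # idle
```

The dead band (`tolerance`) is what keeps even this trivial agent from oscillating between heating and cooling around the setpoint, a small instance of the design problems real agents face.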
The science fiction stories of Isaac Asimov are classic illustrations of technology ethics in robotics. In his I, Robot collection, the stories center on problems that arise when robots are programmed to follow Asimov’s Three Laws of Robotics. These laws contain inconsistencies and generate contradictions, and Asimov wrote the stories precisely to demonstrate their inadequacy.
Law One – “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Law Two – “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
Law Three – “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”
Zeroth Law (added later by Asimov, taking precedence over all the others) – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
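The precedence structure of the laws, where a lower law yields to any higher one, can be sketched as an ordered veto check. This is a hypothetical illustration, not anything from Asimov's stories; the boolean predicates are stand-ins for judgments the stories show to be genuinely ambiguous:

```python
# Hypothetical sketch of the laws' precedence ordering: a proposed action
# is vetoed by the highest-priority law it violates. The simple boolean
# flags stand in for the hard judgments (what counts as "harm"?) whose
# ambiguity drives the plots of the stories.

def permitted(action):
    """Return (allowed, vetoing_law) for a dict describing a proposed action."""
    laws = [
        ("Zeroth", lambda a: not a.get("harms_humanity", False)),
        ("First",  lambda a: not a.get("harms_human", False)),
        ("Second", lambda a: a.get("obeys_order", True)),
        ("Third",  lambda a: not a.get("self_destructive", False)),
    ]
    # Check laws in priority order; the first violated law vetoes the action.
    for name, check in laws:
        if not check(action):
            return (False, name)
    return (True, None)

# Obeying an order still fails if the order would harm a human (Second
# yields to First):
print(permitted({"harms_human": True, "obeys_order": True}))  # (False, 'First')
print(permitted({"self_destructive": True}))                  # (False, 'Third')
print(permitted({}))                                          # (True, None)
```

Even in this toy form, the ordering only resolves conflicts the predicates can already detect; when two actions each violate the same law, or when harm is uncertain, the scheme gives no answer, which is exactly the inadequacy the stories dramatize.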
~ Boethius ~