The expansion of AI opens unprecedented opportunities, yet it also raises urgent dilemmas regarding its responsible use.
The emergence of Artificial Intelligence in virtually every field has made a significant impact on our lives. Surprising applications of AI appear daily, and as companies find new uses for the technology, its effect on everyday life will only grow. Consequently, as with so many other technological breakthroughs, this accelerated progress raises questions and scenarios that cannot be ignored. One of the most evident, and perhaps the least discussed in depth today, concerns ethics and the responsible use of AI.
What is ethics? The Royal Spanish Academy defines ethics as 'the set of moral norms that govern a person's conduct in any area of life.' This concept is built through a process of reflection on one's own actions and their underlying motives, the internalization of specific moral standards, and the construction of a 'moral conscience' that guides decision-making in various contexts. Ethics, then, is not something that can be 'injected' or simply learned; it is dynamic, evolving with an individual's experiences and sustained by the capacity to reflect on one's own existence.
In philosophy, qualia refers to the subjective qualities or sensations perceived through personal experience, which cannot be fully conveyed through words or objective concepts. Current AI models lack the capacity to reflect autonomously on their 'experiences' and, as a result, cannot develop a 'consciousness.' This limitation means that the responsibility for defining the values and ethical principles that govern these systems rests squarely with the people and companies that make them available to users and clients.
In this sense, transparency in the decision-making process of AI agents is fundamental to evaluating the ethical and moral impact they may have on society. An unidentified bias in an AI-driven system can lead to consequences ranging from discrimination to severe reputational damage for individuals or companies, along with legal ramifications for those involved. Ultimately, the absence of an ethical framework defining the responsible use of this technology can lead to scenarios where adoption is undermined by fear or distrust.
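To make the point concrete, the following minimal sketch shows one common way such a bias can be made measurable: the 'four-fifths' disparate-impact ratio, which compares approval rates across groups and flags values below 0.8 for scrutiny. The groups, decisions, and function names here are entirely hypothetical, chosen only for illustration.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's.

    Under the common "four-fifths" screening rule, values below 0.8
    suggest the system deserves closer scrutiny.
    """
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical decisions produced by an AI-driven loan-approval system.
decisions = ([("A", True)] * 70 + [("A", False)] * 30
             + [("B", True)] * 45 + [("B", False)] * 55)

print(f"Disparate impact ratio: {disparate_impact(decisions, 'B', 'A'):.2f}")
# -> 0.64, below the 0.8 threshold: a bias worth investigating before deployment.
```

A metric like this does not explain why the system discriminates; it only makes the disparity visible, which is precisely where transparency becomes indispensable.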
'Explainability' plays a crucial role: if we do not understand how AI systems reach their conclusions, unidentified biases may remain hidden and propagate over time. This places the spotlight on the ethics of those who design, build, and deploy these systems. The biases or intentions present in an AI agent are merely an amplified reflection of the biases or values of the people involved in its development and use. Therefore, the ethics of any AI-based system ultimately depend on the commitment and responsibility of those participating in its creation and application.
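As a minimal sketch of what explainability looks like in practice (assuming numpy and scikit-learn are available), a deliberately transparent model such as a logistic regression exposes its conclusions through its learned coefficients. The feature names and data below are hypothetical; real audits rely on richer tools such as permutation importance or SHAP values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # hypothetical columns: income, debt, zip_group
# The labels secretly depend on zip_group, a stand-in for a proxy variable.
y = (X[:, 0] - X[:, 1] + 0.8 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Inspect which inputs drive the model's conclusions.
for name, coef in zip(["income", "debt", "zip_group"], model.coef_[0]):
    print(f"{name:>9}: {coef:+.2f}")
# A large weight on zip_group surfaces a proxy bias that an opaque model
# could have kept hidden from its operators.
```

The design choice matters: an interpretable model makes the inherited bias visible at a glance, whereas an opaque one would let it propagate silently, exactly the risk described above.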
In the end, it is humans who decide to use AI for decision-making, and that choice carries a high degree of responsibility for the results obtained and for their implications within each area of application.
In November 2021, UNESCO established the first global standard on AI ethics. Its four fundamental pillars are respect for human rights, fostering peaceful societies, ensuring diversity and inclusion, and protecting the environment and ecosystems. The standard promotes the principle that ethical and legal responsibility for AI must be attributable to natural or legal persons. Beyond specific legal frameworks that establish essential aspects such as personal data protection, Argentina has aligned itself with UNESCO's recommendations on the responsible and ethical use of AI.
Global technology giants, the primary drivers of mass AI adoption, promote the responsible use of these technologies by defining operating principles and recommendations for the use of their platforms and services. These definitions involve various corporate departments, each contributing a different perspective on how the technology is used day to day. The more diverse these viewpoints, the more representative the resulting ethical principles will be.
The governance and management of new technologies are essential to ensuring that AI adoption develops within an ethical and responsible framework. It is fundamental to understand that the principles and values guiding that adoption are not the sole responsibility of technical departments; on the contrary, they should emerge from multidisciplinary work that incorporates the perspectives of diverse stakeholders. Beyond compliance with sector-specific legal regulations, each organization must clearly define its values and operating principles so that they are reflected in the systems it employs. Just as in human life, these definitions must be reviewed and debated periodically to ensure the responsible and up-to-date use of available technology.
The development and use of AI that is responsible, ethical, unbiased, diverse, and transparent, reflecting values that foster business evolution while remaining people-centered, is a commitment we must embrace and promote as facilitators of new technologies that are no longer aspirational ideas but everyday realities.