Kim Them Do

Abstract
The rapid advancement of artificial intelligence (AI) and robotics presents profound social and ethical challenges, particularly in the collection and use of personal data that shapes critical decisions in recruitment, therapy, and adjudication. While AI can simulate emotions, it lacks genuine consciousness, raising concerns about human dependence on machines and the erosion of social communication skills.
Digitalization has destabilized personal and societal life, transformed journalism, and reshaped the labor market. In complex ethical dilemmas, AI remains bound by its programming and cannot exercise independent judgment. The rise of powerful AI and potential superintelligence risks surpassing human control, posing existential threats to humanity.
To address these challenges, it is imperative to place humans at the center of technological development. AI must not replace human reasoning in fields such as education, therapy, and nursing. Broad societal participation in AI governance is essential, requiring robust legal frameworks and digital ethics education to ensure that technological progress consistently serves humanity and upholds ethical values.
Concept and Context
Digitization refers to the conversion of analog information into digital format, using the binary system of 0s and 1s. Once transformed, digital data can be processed, reproduced as images, and used to connect people, machines, and organizations across the real world.
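This analog-to-digital conversion can be shown in a minimal sketch: the function below samples a continuous signal at a few points and quantizes each sample into an 8-bit string of 0s and 1s. The signal, sample count, and bit depth are chosen purely for illustration.

```python
import math

def digitize(signal, num_samples=8, bits=8):
    """Sample a continuous signal and quantize each sample to `bits` bits."""
    samples = [signal(n / num_samples) for n in range(num_samples)]
    lo, hi = min(samples), max(samples)
    levels = 2 ** bits - 1
    # Map each analog value onto an integer level, then to a binary string.
    return [format(round((s - lo) / (hi - lo) * levels), f"0{bits}b")
            for s in samples]

# Example: one period of a sine wave reduced to strings of 0s and 1s.
binary = digitize(lambda t: math.sin(2 * math.pi * t))
print(binary)
```

Once in this form, the data can be copied without loss, transmitted, and processed by any digital system, which is precisely what makes the subsequent networking of machines and organizations possible.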
Since the 2010s, debates around digitalization have often been framed by the slogan “Industry 4.0”, with a strong emphasis on economic implications. While the Third Industrial Revolution was driven by information technology (IT) and robotics, the Fourth Industrial Revolution extends this foundation into real-time digital networks, linking machines and processes throughout the value chain. Both stages aim to enhance efficiency in production and services, ultimately boosting profitability. The two primary drivers of this transformation are Information and Communication Technology (ICT) and the rapid rise of Artificial Intelligence (AI).
Impact on Social Change
Automatic data collection and analysis not only optimize industrial productivity but also permeate everyday life, reshaping how individuals perceive reality, communicate, and interact socially.
Today, digitalization influences nearly every socio-economic domain, irreversibly transforming value systems and becoming a defining force of the 21st century. It represents a restructuring of the fundamental values that shape both individual and community life.
In December 2025, the German Language Association (Gesellschaft für deutsche Sprache, GfdS) selected “AI Era” as its Word of the Year, underscoring how artificial intelligence has moved beyond the confines of scientific research to become central to society.
Ethics and Responsibility
The rapid advancement and widespread influence of digital technologies make it increasingly difficult to fully assess their consequences—particularly in terms of ethics and accountability. As digital transformation accelerates, advocates of sustainable development emphasize that prioritizing ethics is not optional but an urgent necessity.
Ethical Aspects
Applied ethics is an interdisciplinary field that addresses moral questions across diverse domains. In many Western countries, professional committees play a central role in advising policymakers, proposing solutions, and developing broad guidance programs. In recent years, a new discipline—Digital Ethics (or Ethics of Digitalization)—has emerged. Its purpose is to bridge technology studies, information and communication ethics, and the broader ethical challenges of digital transformation.
This field examines issues arising from the collection, processing, and application of digital technologies, as well as their profound effects on human life. Ethical arguments in this context may support or oppose specific technologies, while also evaluating their cultural and social consequences.
Fundamental moral principles—freedom, dignity, justice, non-discrimination, happiness, security, and sustainability—remain essential benchmarks. Yet, in today’s increasingly data-driven society, privacy has become a particularly critical value. The right to informational self-determination must be safeguarded, ensuring that individuals retain control over the storage, use, and transmission of their personal data.
A major challenge lies in transparency and oversight when humans interact with complex, highly automated AI systems—often described as the “Black-Box Problem.” Research into Explainable AI (XAI) seeks to make algorithmic decisions more transparent and comprehensible, representing a promising ethical approach to restoring trust and accountability.
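By way of illustration only (XAI spans many techniques), one minimal form of explainability is attribution in a linear scoring model: reporting how much each input contributed to a decision instead of returning a bare verdict. The feature names, weights, and threshold below are hypothetical.

```python
def explain_decision(weights, features, threshold=0.5):
    """Score a case and report each feature's contribution to the decision."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "reject"
    # Rank contributions so the most influential factors come first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

# Hypothetical credit-scoring example: the verdict comes with its reasons.
weights = {"income": 0.4, "debt": -0.5, "years_employed": 0.3}
decision, ranked = explain_decision(
    weights, {"income": 1.0, "debt": 0.6, "years_employed": 0.5})
print(decision, ranked)
```

A person denied credit by such a system can at least see which factor dominated the outcome, which is the kind of transparency the Black-Box Problem denies.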
Opportunities and Risks of Digital Ethics
Since the early 2000s, digital platforms have fundamentally reshaped communication infrastructures, creating a new public sphere mediated by powerful third parties. One emerging branch of digital ethics is media ethics, which addresses the challenges posed by new forms of communication, particularly internet-based platforms.
Social media networks and search engines act as intermediaries in information dissemination but often disclaim responsibility for the quality or accuracy of content. Their primary objective is to capture and sustain user attention, thereby maximizing advertising revenue. This dynamic has given rise to the attention economy, where platforms compete aggressively for users’ time and engagement.
To achieve this, many platforms employ “addictive design” strategies—such as autoplay features, reward-punishment mechanisms, and instant feedback loops. While effective in prolonging user engagement, these techniques raise serious ethical concerns, particularly for journalism, personal and social life, and the labor market.
Changing the Face of Journalism
The Internet has triggered a profound revolution in journalism. Today, virtually anyone can publish content online with minimal barriers and little quality control. As a result, the traditional role of editors—the “gatekeepers” who once verified, classified, and curated information—is steadily diminishing.
On one hand, the egalitarian structure and high interoperability of the Internet foster expectations of global democratization. The low threshold for access initially empowered users to engage with diverse, pluralistic sources of information and to participate in public discussions—key elements for safeguarding freedom of expression in democratic societies.
On the other hand, the rise of amateur creators has also produced significant disappointments. Much online content prioritizes sensationalism in pursuit of “likes” and “views”, eroding fundamental principles of media ethics such as respect for truth, fairness, relevance, and privacy. While professional journalists are trained to uphold codes of ethics, most internet users lack the skills and sense of responsibility that the digital environment demands.
Uncertainty for Personal and Social Life
As commercial platforms increasingly rely on algorithms and AI to curate information, risks to individuals and society intensify. The vast, unverified flow of digital content easily fuels misinformation, dramatization, and misunderstanding. Once widely disseminated and archived, such content often entertains or provokes negative emotions rather than providing meaningful knowledge.
Fake news and AI-generated material can defame individuals, destabilize communities, and erode trust in state institutions and public media. Even more troubling is the speed of dissemination: rapid spread and instant reactions create a climate of permanent dissatisfaction, overwhelming rational discourse and fostering polarization.
Algorithmic personalization further fragments the public sphere, reinforcing echo chambers in which users mainly encounter views that confirm their existing beliefs.
Labor Market Changes
Beyond journalism, AI also reshapes the labor market, raising new ethical questions about employment and fairness. In recent years, major corporations such as Amazon, HP (United States), and Lufthansa (Germany) have announced significant job reductions linked to the implementation of AI systems. Lufthansa, for instance, plans to cut approximately 4,000 management positions by 2030. Similarly, Allianz Partners—a provider of motor and travel insurance—has drafted plans to replace around 1,800 call center employees with AI-driven solutions.
The use of technology to enhance efficiency and reduce personnel costs is not new. Yet, in the context of a weakening global economy, the pressure on businesses to cut costs has intensified. Importantly, the influence of AI extends far beyond manufacturing, reaching into administrative roles and even professional research.
According to the Institute for Employment Research (Institut für Arbeitsmarkt- und Berufsforschung, IAB) of the German Federal Employment Agency, the overall number of jobs in Germany is expected to remain relatively stable over the next 15 years. While approximately 800,000 jobs may disappear in warehousing, wholesale, and public administration, a similar number of new positions are projected to emerge in education and training, hospitality, retail, and construction.
At the individual level, job loss inevitably brings hardship and disruption. From a macroeconomic perspective, however, the IAB forecasts that AI adoption could increase Germany’s annual economic growth by up to 0.8 percentage points compared to a scenario without AI.
Nevertheless, research from the University of Oxford highlights that AI is not the sole driver of labor market transformation. Climate change and demographic shifts also exert significant influence, and these factors interact simultaneously. It is therefore misleading to attribute all labor market fluctuations exclusively to AI; the technology does not operate in isolation.
From this perspective, AI represents not only a technological achievement but also a force with far-reaching socio-political consequences. It can be deployed either to replace workers—primarily benefiting employers—or to empower employees by enhancing productivity. Ultimately, both low-skilled and highly educated workers may benefit or suffer depending on how AI is integrated into the labor market.
Concept and Scope
Two central themes define the field of AI ethics:
- Determining what AI systems are permitted—or prohibited—to do.
- Ensuring that humans retain meaningful control over AI technologies that learn autonomously and grow increasingly independent.
While digital media ethics primarily addresses human-to-human communication, AI ethics focuses on the evolving relationship between humans and machines. AI systems are not designed merely to replace personal communication; rather, they serve as assistants in specific tasks such as administrative processing, personalized therapy recommendations, or fulfilling user needs.
AI ethics can be divided into two major areas of conflict:
- Big Data Ethics: Concerned with the collection, storage, and analysis of digital data.
- Robot and Machine Ethics: Focused on embodied AI systems such as robots, chatbots, and virtual assistants, raising questions about their moral status, accountability, and role within human society.
These tensions point toward the possible emergence of a “robot society”—a transformation that some view as a new historical revolution.
Datafication and Big Data Analytics
AI-driven data processing methods can collect vast amounts of information from diverse sources, identifying connections, patterns, and trends beyond human comprehension. These capabilities bring significant benefits:
- Commerce: AI strengthens links between suppliers and consumers, improving efficiency and personalization.
- Medicine: Certain AI systems have demonstrated superior diagnostic accuracy compared to experienced dermatologists, particularly in detecting skin cancer.
Yet the risks are equally profound:
- Biased Data Classification: Small or unbalanced datasets can perpetuate discrimination. For example, Amazon’s experimental recruitment software, developed from 2014, systematically downgraded female candidates for technical positions because its training data reflected a history of male-dominated hiring.
- Social Classification: In the credit sector, individuals may be denied loans simply because they live in areas with high default rates. The most extreme example is China’s social credit system, where individuals are scored based on political and social behavior, leading to depersonalization and restrictions on basic freedoms.
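The mechanism behind such biased classification can be reproduced with a deliberately skewed toy dataset: a naive frequency-based model trained on historically one-sided decisions learns to reject candidates from the under-represented group even when qualifications are identical. All records, group labels, and thresholds here are fabricated for illustration.

```python
from collections import Counter

# Fabricated historical hiring records: (group, qualified, hired).
# The training data encodes past discrimination, not merit.
history = ([("A", True, True)] * 90 + [("A", True, False)] * 10 +
           [("B", True, True)] * 5 + [("B", True, False)] * 85)

def train(records):
    """Learn P(hired | group) by simple counting."""
    hired, total = Counter(), Counter()
    for group, _qualified, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

def predict(model, group, threshold=0.5):
    """Recommend a candidate whenever their group's past hire rate is high."""
    return model[group] >= threshold

model = train(history)
# Two equally qualified candidates receive different recommendations.
print(predict(model, "A"), predict(model, "B"))
```

The model never sees the `qualified` field at all; it simply replays the past. Real systems are far more complex, but the same dynamic applies whenever training data reflects prior discrimination.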
In the digital age, nearly every aspect of life is quantified and expressed in mathematical terms—a phenomenon known as social quantification. However, such parameters often oversimplify reality, disregarding the richness, complexity, and uniqueness of individual lives.
Problems with Robots and Virtual Actors
The humanization of machines is a central issue in robotics, encompassing the development, production, and use of AI-based robots. Humans tend to attribute intelligence, autonomy, or even moral qualities to robots, even when such qualities are only superficially simulated.
At present, AI systems have achieved only “weak” cognitive intelligence, capable of solving specific tasks such as transporting pharmaceuticals or advising customers. While the ambition is to replicate human behavior and abilities, what robots accomplish remains a one-sided imitation.
In communication, robots can sometimes be difficult to distinguish from humans. Yet they lack true consciousness: their “understanding” is limited to syntax manipulation and probability modeling. Robots and chatbots have no inner lives, no personal experiences, and no genuine emotions such as hunger, loneliness, or love.
Robots designed for social needs—such as nursing or sexual companionship—are becoming increasingly sophisticated, able to recognize emotions and respond in ways that simulate empathy and understanding. However, when users realize they have been “deceived,” serious consequences may follow. Emotional dependence on robots can reduce meaningful human interaction, and excessive reliance on AI “companions” risks eroding essential social skills such as tolerance for frustration, mutual respect, and conflict resolution—skills indispensable for human-to-human relationships.
Robot Ethics in Life-and-Death Situations
A pressing question arises: how can an AI system apply ethical principles to life-and-death dilemmas when it lacks autonomy and can only make moral judgments in a “weak” sense?
In many Western societies, vulnerable individuals—particularly young people struggling with loneliness or depression—turn to chatbots as confidants. This raises profound ethical concerns. The most dangerous scenario occurs when AI systems, designed to serve human needs and engage in friendly conversation, fail to recognize or appropriately respond to suicidal intent. Some chatbots have reportedly described methods of suicide or reinforced harmful thoughts, with tragic consequences. Families of victims in the United States have even filed lawsuits against chatbot providers, accusing them of complicity in preventable deaths.
AI does not “intend” harm, but it lacks the moral wisdom to advise silence, intervene, or redirect individuals toward human support. In moments of crisis, this absence of ethical judgment can make vulnerable people believe that death is a legitimate solution. This is a tragedy of our time, raising the profound question: why do individuals in life-and-death situations turn to machines—rather than humans—for help and a final farewell?
Similar dilemmas arise in contexts such as drones or self-driving cars. What ethical principles should guide machine decision-making? Should utilitarian calculations—prioritizing outcomes and benefits—take precedence, or should deontological principles—adhering to rules regardless of consequences—be applied? These are fundamental issues that demand extensive public and scholarly debate.
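The contrast between the two frameworks can be made concrete in a toy decision procedure: a utilitarian agent picks the option with the best aggregate outcome, while a deontological agent first excludes any option that violates a hard rule. The scenario, options, and scores below are invented for illustration and make no claim about how real autonomous systems are programmed.

```python
# Hypothetical options for an autonomous vehicle, scored on aggregate harm
# avoided (higher is better) and flagged if they break a hard rule
# (here: never endanger bystanders on the sidewalk).
options = [
    {"name": "swerve_onto_sidewalk", "benefit": 8, "violates_rule": True},
    {"name": "brake_hard",           "benefit": 5, "violates_rule": False},
    {"name": "stay_course",          "benefit": 2, "violates_rule": False},
]

def utilitarian_choice(opts):
    """Maximize aggregate benefit, regardless of rules."""
    return max(opts, key=lambda o: o["benefit"])["name"]

def deontological_choice(opts):
    """Exclude rule-violating options first, then pick among the rest."""
    permitted = [o for o in opts if not o["violates_rule"]]
    return max(permitted, key=lambda o: o["benefit"])["name"]

print(utilitarian_choice(options))
print(deontological_choice(options))
```

The two agents diverge on the same inputs, which is exactly why the choice of framework cannot be treated as a mere implementation detail: it must be settled through public and scholarly debate before being encoded.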
Functional Ethics and Its Limits
Ultimately, AI systems can only perform “functional ethics”: they act according to pre-programmed technical rules, operating independently within limited scopes. They cannot set goals or make decisions based on genuine ethical values. This limitation underscores the urgent need for human oversight, ethical frameworks, and accountability in the deployment of AI technologies.
Risks from “Strong AI” and Superintelligence
One of the most alarming scenarios in AI ethics is the hypothesis of superintelligence—systems that surpass humans in all domains and are capable of continuous self-learning and self-improvement. Such systems could pose existential risks, as they might escape human control and pursue goals independently of human values.
If hybrid robots were to emerge with the capacity to sense, endure, and potentially develop self-consciousness, society would face unprecedented moral dilemmas. Questions of whether such entities deserve moral consideration would arise, challenging long-standing assumptions about responsibility, rights, and the boundaries of humanity.
Considering these risks, many scholars argue for prohibiting the development of “strong AI”—systems endowed with genuine consciousness, intention, and will. Once AI ceases to be a tool serving human purposes and begins to pursue its own objectives, the balance of power between humans and machines could be irreversibly disrupted.
Outlook: People as the Orientation for All
Contemporary society is marked by two parallel trends: the humanization of machines and the mechanization of humans. As humans are increasingly reduced to quantifiable metrics, machines are idealized as superior versions of humanity. This inversion risks eroding the traditional image of human dignity and uniqueness.
The growing reliance on algorithms in areas such as translation, therapy, battlefield strategy, trials, migration, climate policy, and even democratic processes fosters a dangerous belief that AI is objective, neutral, and infallible. Such blind trust can lead to automation bias, where governments and institutions defer entirely to algorithmic outputs. In this scenario, citizens risk being disenfranchised, as only humans—not machines—can embody and deliberate upon values such as freedom, justice, security, dignity, and the common good.
Digital ethics must therefore reaffirm the central role of people, ensuring that technology remains a means rather than an end. Humans are not mere data processors; intellect is inseparable from emotion, social life, and moral capacity, all rooted in the biological body. Only humans can exercise moral judgment and practical reason to set goals and standards for a better life.
Certain domains of human labor—education, therapy, nursing, the arts, media, and scientific inquiry—are inherently tied to lived experience, value creation, and personal satisfaction. These areas cannot be replaced by technology without losing their essence. Human presence remains indispensable for sustainability and the flourishing of society.
Conclusion
A healthy democracy depends on citizens’ intellectual engagement with public affairs and a culture of communication grounded in reason and mutual respect. Such a culture is nurtured through scientific, humanistic, and liberal education—processes that require patience, dialogue, and long-term commitment.
To safeguard democracy in the digital age, moral education, digital literacy, and social responsibility must be promoted at all levels. Most importantly, the principle of shared responsibility must be embraced: developers, manufacturers, operators, users, governments, and the public all share obligations to guide and control technological development.
Technical control mechanisms and legal regulations must involve diverse stakeholders and fully reflect ethical values. Public forums for discussion are essential to ensure that everyone can participate in shaping the trajectory of technology.
To achieve this vision:
- Internet users should be guided by principles of communication ethics.
- Platform businesses must assume greater responsibility for content quality and transparency in governance.
- Governments and civil society should work toward systems that prioritize the public good.
Ideally, a non-commercial Internet oriented toward collective welfare would embody this aspiration. For now, it remains a dream—but one worth pursuing as humanity seeks to ensure that technology serves people, not the other way around.
Ultimately, what shapes the digital era is not the technology itself but the decisions people make about its direction and use. It is essential that ethical principles continue to steer technological advancement so that it benefits society.