Foreword:
Hi. This is my first article, so it might read a bit like a paper; I’m more used to those. If you have any questions or thoughts, be sure to leave them in the comments below. This is a discussion post/article, after all.
This article discusses how AI (not only LLMs, but also advances in data analysis) impacts relationships between people as a whole. Public discussions around artificial intelligence are often dominated by a few common fears: loss of autonomy, mass surveillance, and the possibility of centralised technological control. However, the fear of a grandiose, apocalyptic AI obscures quieter effects of AI on human interactions and day-to-day life: atomisation and dependence, two things tied to how algorithms are used in social media.
To me, this systemic perspective calls back to the situation in Stand on Zanzibar, where society suffered not under a single authoritarian power or rogue technology, but under the sum of rational, optimised decisions across many institutions. This framing is how I chose to examine the ethical risks of modern algorithmic systems.
Algorithmic Optimisation in Social Media Systems
Personalisation becomes problematic when it evolves into atomisation. In an atomised digital environment, individuals increasingly encounter information that aligns with their pre-existing preferences, while exposure to shared narratives or common frames of reference diminishes. Over time, the probability that two users are engaging with the same information, in the same context, decreases significantly.
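To make this concrete, here is a minimal sketch in Python of how feed overlap between users shrinks as the personalised share of a feed grows. The pool size, feed size, and niche model are entirely made-up assumptions for illustration, not any platform's real parameters.

```python
import random

random.seed(0)

N_ITEMS = 10_000                      # total content pool
FEED_SIZE = 50                        # items each user is shown
GLOBAL_TOP = list(range(FEED_SIZE))   # a shared "front page" everyone could see
NICHE_SIZE = 100                      # size of each user's personal interest slice

def make_feed(strength):
    """Build one user's feed: a blend of the shared front page and a
    personal niche. `strength` is the personalised fraction of the feed."""
    n_personal = round(FEED_SIZE * strength)
    feed = set(GLOBAL_TOP[:FEED_SIZE - n_personal])
    niche_start = random.randrange(FEED_SIZE, N_ITEMS - NICHE_SIZE)
    while len(feed) < FEED_SIZE:
        feed.add(random.randrange(niche_start, niche_start + NICHE_SIZE))
    return feed

for strength in (0.0, 0.25, 0.5, 0.75, 1.0):
    overlaps = []
    for _ in range(500):  # sample 500 random user pairs
        a, b = make_feed(strength), make_feed(strength)
        overlaps.append(len(a & b) / len(a | b))  # Jaccard overlap
    print(f"personalised share={strength:.2f}  "
          f"mean feed overlap={sum(overlaps)/len(overlaps):.3f}")
```

At a personalised share of 0.0 every pair of users overlaps completely; by 1.0 the overlap is near zero. Nothing in the loop is adversarial; the declining commonality falls out of the blend alone.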
The paper “Echo Chambers and Algorithmic Bias” puts the effects of social media personalisation in clear terms:
Social media algorithms personalize content feeds, presenting users with information that reinforces their existing beliefs. This creates echo chambers, where users are isolated from diverse viewpoints.
-(Salsa Della Guitara Putri, Eko Priyo Purnomo, Tiara Khairunissa)
This fragmentation does not require ideological manipulation or deliberate polarisation. It arises naturally from systems that prioritise relevance and engagement over commonality. The result is not more confrontation or argument; it is people who are no longer discussing the same subject at all.
In such an environment, social cohesion weakens not because individuals choose to disengage, but because the conditions necessary for collective understanding no longer reliably exist. Public discourse becomes a collection of parallel conversations, each internally coherent yet increasingly disconnected from the others.
Ethical Risks Beyond Control
Loss of Collective Agency
One of the most significant ethical risks of algorithmic atomisation is the slow erosion of collective agency. When individuals experience social issues through personalised informational streams, systemic problems become personal concerns. Political, economic, and even social challenges become matters of individual perception rather than a shared reality the public faces together. This is not a one-way effect, either: individuals may also wrongly assume that their personal matters are problems the entire public faces, further misaligning personal and public perceptions.
Collective action depends on shared awareness: a population recognising not only that a problem exists, but that it exists for others as well. Algorithmic personalisation undermines this prerequisite by fragmenting attention and experience. The result is a society that struggles to coordinate responses to large-scale issues, even when the technical capacity and resources are available.
Algorithmic Invisibility and Exclusion
A further ethical concern lies in the treatment of those who do not fit well within algorithmic categories. Social media and AI systems function by detecting patterns in data; users who generate limited engagement, atypical behaviour, or low-value signals are less likely to be prioritised, amplified, or even recognised.
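As a hedged illustration, here is a toy engagement-weighted ranker. The weights, feed size, and user names are invented for the example; the point is only the shape of the mechanism: measurable interaction in, visibility out.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Invented weights; real platforms tune theirs, but the shape is similar:
    # posts that generate more interaction score higher and surface more.
    return 1.0 * post.likes + 2.0 * post.comments + 3.0 * post.shares

posts = [
    Post("frequent_poster", likes=120, comments=30, shares=15),
    Post("average_user",    likes=25,  comments=5,  shares=2),
    Post("quiet_user",      likes=2,   comments=0,  shares=0),  # atypical, low-signal
]

FEED_SLOTS = 2  # only the top-ranked posts are ever surfaced

ranked = sorted(posts, key=engagement_score, reverse=True)
for post in ranked[:FEED_SLOTS]:
    print(f"shown:  {post.author} (score={engagement_score(post):.0f})")
for post in ranked[FEED_SLOTS:]:
    # No one decided to exclude quiet_user; the ranking simply never selects
    # them, they receive no new engagement, and their future score drops too.
    print(f"hidden: {post.author} (score={engagement_score(post):.0f})")
```

The feedback loop in the final comment is the important part: low visibility produces low engagement, which produces lower visibility still.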
This produces a form of exclusion that is neither intentional nor easily observable. Individuals and communities may find themselves algorithmically invisible. Unlike traditional forms of marginalisation, this invisibility does not provoke resistance or accountability, precisely because it lacks a clear source and any distinct human controller.
From an ethical standpoint, this raises questions about fairness, representation, and responsibility in systems where harm emerges from omission rather than action.
Normalisation of Systemic Harm
Perhaps the most challenging ethical issue is the diffusion of responsibility. Algorithmic systems are rarely controlled by a single actor; they emerge from interactions between corporate incentives, technical constraints, regulatory environments, and user behaviour. Each component may operate rationally and ethically within its own domain, yet the system as a whole produces harmful outcomes.
This mirrors a broader challenge in AI ethics: harms that arise without malicious actors are often the hardest to address. When no individual decision appears unethical in isolation, systemic consequences are easily dismissed as unintended side effects rather than ethical failures.
Social Media as a Case Study in Atomisation
Social media platforms provide a clear illustration of how algorithmic optimisation can undermine shared social space. News feeds prioritise emotionally resonant content, recommendation systems reinforce identity-based engagement, and ranking algorithms amplify content that maximises interaction regardless of social consequence.
Importantly, these systems do not impose beliefs or ideologies. Instead, they shape attention. By continuously selecting what is visible, relevant, and salient, social media algorithms influence how users perceive reality itself. The ethical issue is not persuasion, but selection: what is shown, what is omitted, and what is rendered invisible.
As engagement-driven systems scale, outrage, reinforcement, and emotional intensity become statistically favoured, while nuance, shared context, and slow consensus-building are deprioritised. The resulting environment rewards fragmentation without requiring any explicit intention to divide.
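Here is a sketch of how that statistical tilt can emerge, assuming (purely for illustration) that predicted engagement rises with emotional intensity. The coefficients are invented.

```python
import random

random.seed(1)

# Toy content pool: each item has an emotional intensity in [0, 1].
intensities = [random.random() for _ in range(1000)]

def predicted_engagement(intensity):
    # Assumed relationship: a baseline interaction rate plus an intensity
    # boost, with some prediction noise.
    return 0.02 + 0.10 * intensity + random.gauss(0, 0.01)

ranked = sorted(intensities, key=predicted_engagement, reverse=True)
print(f"pool mean intensity:   {sum(intensities) / 1000:.2f}")   # ~0.50
print(f"top-50 mean intensity: {sum(ranked[:50]) / 50:.2f}")     # ~0.90
```

No line of this code prefers outrage. The ranker merely selects on predicted interaction, and whatever correlates with interaction (here, intensity) gets amplified.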
Implications for Future AI Adoption
The proliferation of artificial intelligence systems beyond social media into areas such as education, employment, healthcare, and public services can increase atomisation further. AI-powered personalised learning platforms, work allocation that adapts to individual needs, and algorithmic decision-making create opportunities for greater efficiency and better individual outcomes; however, they can also further reduce the shared experiences associated with these institutions. Left unscrutinised, AI systems can erode social cohesion, the thing that binds us through trust, the ability to work in harmony with others, and the ability to share in the responsibility of our communities.

Ethical consideration of AI should therefore include systemic impacts on a society's cohesion, beyond concerns such as accuracy, discrimination, or transparency. It is also necessary to understand that not every harm will manifest immediately or in a measurable way. Many harms will develop slowly, through the gradual degradation of the shared frameworks that people in a society rely on to interact effectively.
Ethical Considerations and Design Implications
Addressing algorithmic atomisation does not imply rejecting personalisation or AI-driven systems outright. Rather, it suggests the need for broader ethical metrics and design principles (a rough sketch of one such metric follows the list). These may include:
- Evaluating systems based on their impact on shared context, not only individual outcomes
- Designing mechanisms that preserve common informational spaces alongside personalisation
- Increasing transparency around optimisation goals and trade-offs
- Treating social cohesion as a legitimate design concern rather than an externality
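For the first principle, here is a minimal sketch of what a shared-context metric could look like, using the same pairwise-overlap idea as the earlier simulation. The function name and the feed data are hypothetical.

```python
from itertools import combinations

def shared_context_score(feeds: list[set[str]]) -> float:
    """Mean pairwise Jaccard overlap across sampled user feeds:
    1.0 means everyone sees the same items, 0.0 means fully disjoint feeds."""
    pairs = list(combinations(feeds, 2))
    if not pairs:
        return 1.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Hypothetical before/after feeds for a proposed ranking change:
before = [{"a", "b", "c", "d"}, {"a", "b", "c", "e"}, {"a", "b", "d", "f"}]
after  = [{"a", "x", "y", "z"}, {"b", "p", "q", "r"}, {"c", "s", "t", "u"}]
print(f"shared context before: {shared_context_score(before):.2f}")  # ~0.51
print(f"shared context after:  {shared_context_score(after):.2f}")   # 0.00
```

A metric like this could sit alongside engagement metrics in an A/B test, so that a ranking change that boosts interaction while collapsing shared context is at least visible as a trade-off.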
Crucially, ethical AI design must acknowledge that some values (such as shared understanding and collective agency) are difficult to quantify, yet essential to preserve.
Conclusion
Technological systems rarely fail because they are outright malicious. More often, they fail and harm people through excessive optimisation. As others have said:
Smart technologies facilitate precise and focused advertising and marketing efforts, potentially impacting user behavior and decision-making processes.
-(R. Wang et al., 2023).
The ethical challenge for present and future AI and algorithm-based systems is not only to ensure that individuals are not controlled or manipulated, but also to understand and address the invisible fragmentation of social reality that occurs as relationships between people are increasingly mediated by digital systems. The greatest risk we face isn’t a world where machines rule, but a world where individuals are ever more alone together: optimised as individuals while their collective social structures continue to deteriorate.

Top comments (1)
Great first post!
It is sad that more information leads to less informed people; it should improve life.
While we all have our safety bubble, it is always good to step out and try to understand other perspectives. Sometimes both sides keep their perspective, and other times a seed is planted for adjustments to the bubble. The more adjustments you make, the more people you can reach.