When thinking about the possible ethical issues arising from technology in a ToK context, I start from the perspective of ethics rather than technology. In doing so I will be asking 4 big questions:

  • What are the potential ethical issues arising from technology?

  • Where do these ethics and technology issues sit on a ‘good-bad’ continuum?

  • What are the implications of these ethics & technology issues for the construction and acquisition of knowledge?

  • How might these ethics & technology issues evolve in the future?

What are the potential ethical issues arising from technology?

We tend to think of ethical issues arising from technology in modern terms, ie issues arising from modern digital technologies. However, there have been ethical issues arising from technology since humans first began to use it. In the modern age such issues rose to greatest prominence during the first era of industrialisation in the 19th century.

I will look at ethical issues arising from technology through 4 main lenses / perspectives. Each perspective has both positive and negative ethical consequences within it.

4 lenses that have ethical consequences arising from the application of technology:

  • Accessibility

  • Health, productivity and fulfilment.

  • Diversification - Homogenisation

  • Privacy - Autonomy - Control

Accessibility.

Positive ethical consequences of improved access afforded by technology.

Technology both increases and decreases accessibility across a wide range of fields, and in multiple ways. A starting point for thinking about this is that technology increases our capacity to manipulate the environment. The outcomes of such manipulation, in terms of accessibility, will reflect the values of those in charge of the technology; whether this is judged a positive or negative ethical outcome will also largely depend on the values of those making the judgement. Let’s take the application of technology in a mediaeval village as a real world example. Applying technology to crop production (eg ploughing, irrigation, crop rotation etc) increases yields, thereby increasing access to food and freeing up residents’ time for other pursuits. This, in turn, improves access to healthcare, culture, further technological innovation etc.

The most obvious improvements in access realised by modern technologies are access to knowledge acquisition and knowledge production. The development of public libraries, public education, and the internet have significantly improved the access that people have to knowledge acquisition. The development of universities, and more recently Web 2.0, significantly improves the access that people have to knowledge construction.

Schutz: The Social Distribution of Knowledge.

The positive ethical benefits of such improved access are wide ranging, and potentially profound. The expansion of human and civil rights seen in many areas of the world can be understood as being augmented by the improved access to knowledge realised by technology. In his article The Well-Informed Citizen: An Essay on the Social Distribution of Knowledge, Alfred Schutz explains how improved access to knowledge changes the composition of the elite ‘expert’ group who are afforded the right to produce ‘socially approved’ knowledge (that is, knowledge afforded prestige and influence). Technology allows more people to join this group, adding a more diverse set of perspectives to socially approved knowledge. Schutz’s argument can be understood through the models of social identity theory from Social Psychology: wider knowledge access can have liberating effects for individuals by breaking down in-group/out-group stratification. Further, Schutz explains how wider access to knowledge increases people’s opportunities to question taken-for-granted assumptions, thereby increasing the potential to develop ‘better’ knowledge: ideas that have built upon, and evolved from, that which is pre-existing.

Friedman: The World is Flat.

Schutz rather prophetically wrote his article in 1946; many of his ideas were updated in 2005 by Thomas Friedman in his book The World is Flat: A Brief History of the Twenty-First Century. Friedman describes how technology (mainly Web 2.0 tech) has increased access to knowledge acquisition and construction which, in turn, has enabled those in developing countries to compete financially, technologically, and culturally with those in the developed world. Here we see the argument that access afforded by technology has significant positive ethical consequences.

Negative ethical consequences of improved access afforded by technology.

The most common concern raised about the increased access that technology allows applies to digital technologies: the digital divide. We will come to that later in this section, but let’s first take a step back and consider the wider knowledge implications of increased access to technology.

The argument for the positive ethical benefits of increased access to technology is essentially that more of a good thing (knowledge and its consequences) is itself good. However, this doesn’t account for the context within which the change occurs: specifically, the values and purpose of the wider society. The outcomes of increased access are, arguably, largely shaped by the value structures and power relationships of the society within which they operate. If the value structure is one in which power is used to restrict the freedoms, or privacy, of the individual, then this could be reflected in the wider access to knowledge afforded by technology: not only does the mass populace have more access to knowledge, but powerful actors now have more access to the thoughts and behaviours of the population. We will come back to this in the section on privacy.

The concept of ‘more’ isn’t necessarily a positive ethical outcome. Let’s go back to our mediaeval village: after the application of agricultural technology, the people have more time to do things other than farming. However, if there is a power structure which can direct how people spend their time (eg a Lord of the Manor), this extra time could be directed towards negative ethical outcomes such as warfare, environmental destruction etc.

The Digital Divide.

The digital divide refers to the unequal distribution of technology and internet access between different groups of people globally. The divide creates a gap between those who have access to the knowledge resources opened up by the internet and those who do not, leading to unequal opportunities for growth, development, and success. For example, Northern Europe has an ‘internet penetration’ (the proportion of people who have access to the internet) of 95%, whilst Africa’s is approximately 40% (“List of countries by number of Internet users”).

The digital divide affects individuals from different regions, socio-economic backgrounds, age groups, and cultures. People who live in rural areas, low-income communities, and developing countries often lack access to technology and the internet. This limits their opportunities for education, job training, and accessing information, which can lead to lower wages and limited opportunities for social and economic mobility.

The digital divide also affects businesses, as those without access to technology may struggle to compete in the global market. Moreover, it perpetuates existing inequalities, such as those of gender and race, as marginalised communities are often the ones who lack access to technology and the internet.

The Digital Divide is an example of technology widening the gap of power between those with and without access to the technology. This is a recurring theme that we will see arising from many different types of technology across history.

Health, productivity and fulfilment.

I was rather uncertain about including a section on the applied ethical effects of technology, as I run the danger of describing the real world ethical effects of technology rather than focussing on its effects on knowledge acquisition, construction and interpretation. However, as this is something of a false division, and students need to draw upon real world examples for both ToK assessments, I’ll go ahead and include this section anyway.

At a prima facie level, the immediate effect of the application of technology in any area of life should be to increase productivity - this applies as much to the production of knowledge as it does to the production of ceramic vases, cars or pizzas. This increase in productivity has significant ethical effects. There is a long tradition of writers (eg sociologists, artists, economists etc) describing the negative effects of technology in the workplace, and a significant body of research on the alienation and deindividuation experienced in the industrialised workplace. In his paper on Technology and Human Relations, Carleton Coon argues that the increased application of technology requires a more pressing focus on designing for human relations if we are to avoid the negative effects of that technology (such as alienation). Students can review the paper (referenced below) for real world examples.

On the other hand, Jon Shepard, in his 1973 research in the oil industry, found that the application of technology did not inevitably lead to powerlessness and alienation. This finding was underpinned by two main processes: (i) technology increases productivity, and therefore increases the free time that workers have to spend on things they find intrinsically fulfilling; (ii) the increased specialisation of roles afforded by technology can actually increase the autonomy and personal involvement that a worker has in their job.

The ethical consequences of medical technology.

The positive ethical benefits of the application of technology in the medical sciences are both obvious and seemingly indisputable. Students and teachers can draw from a vast range of real world examples, including the development of vaccine technology, the application of medical imaging technology, the application of pharmaceutical technology etc. However, there are, arguably, some negative ethical consequences of the application of technology in the medical sciences. The application of technology to medical knowledge carries the greatest possible ethical consequence: that of maintaining, prolonging or ending life itself. As such, the responsibility and accountability of practitioners are brought to the forefront, and the ethical demands upon them are greatly increased by the increased efficacy of the technology.

Peter Singer (2000) identifies 5 areas in which the ethical considerations have greatly increased in recent years:

  • Quality of end of life care.

  • Tavistock Principles to improve medical error.

  • Prioritisation and access to resources.

  • Stem Cell Research.

  • E-health and bio-global ethics.

I include these here to demonstrate the nuanced range of ethics and technology issues arising from the application of technology in healthcare and medicine. If ToK students want to draw real life examples from these 5 areas they could follow the article referenced below, or search JSTOR for research related to the area of their choosing.

Ethical concerns of medical technology.

In his article Too Much Technology, Bjørn Hofmann argues that the overuse of, and over-reliance on, technology is starting to have negative ethical consequences for the application of medical knowledge. He argues that the application of medical technology can actually do more harm than good, draw funding and resources away from functions which could have more positive effects, reduce efficiency, and lead to overdiagnosis. This last point is of direct relevance to ToK, and requires a little more examination.

Technology enables us to better identify disease and the causes of illness; further, it helps us to better understand the interplay of causal factors. As such, it allows us to develop new categories, and classifications, of medical conditions and illness. The application of technology therefore increases our construction of knowledge: technology allows us to create more knowledge. An example from the medical sciences is the recent development of a machine learning model which can predict rare diseases, even when those diseases aren’t represented in the training data (Singh).

Consequences

An obvious consequence of improved medical technology is increased human lifespan, which itself has ethical consequences. The ethical effects of an increasing human population on resource allocation and competition are well rehearsed, and easy to find sources for, so I won’t detail them here. However, of direct relevance to ToK is the consequent increase in non-working (leisure) time, particularly the effects of the 4th industrial revolution with the application of artificial intelligence and machine learning. Petar Jandrić and Sarah Hayes (2020) look at the range of arguments over whether the 4th industrial revolution will result in fewer jobs (“technological unemployment”) and more leisure time, or lead to the creation of a whole new range of jobs (as has happened in previous periods of technological innovation). Both scenarios will lead to new forms of knowledge production, acquisition, and interpretation as they further shape, and redefine, human-machine interaction.

References to the work of Heidegger will be useful for those who wish to follow this line of argument. There are ancillary ethics and technology issues in terms of the varied access to these forms of technology, and the consequent implications for power distribution and life fulfilment.

Diversification - Homogenisation

Of more direct relevance to our theory of knowledge is the effect of technology on the type of knowledge produced, and on the distribution of that knowledge. These arguments are most accessible to us in 2023 by considering the rise of digital technologies over the last 40 years; however, the same principles apply to all previous forms of knowledge technology.

The oft-cited promise during the mass adoption of the internet, and the World Wide Web, in the mid 1990s was that many more people would have access to far more information, faster and cheaper than ever before. Even taking into account the inequalities of the digital divide (discussed earlier in this article), this promise of mass access to mass information appears to have been accurate: this is the increased acquisition and interpretation of knowledge afforded by new technologies. The dawn of Web 2.0 in the early 2000s promised that many more people would be able to produce and construct knowledge through self-publishing in written, visual, and musical forms. Again, the rise of sites such as YouTube, Wordpress and TikTok would seem to have borne this promise out.

Concerns.

However, many writers have taken a more critical approach to this apparent diversification in the production, acquisition and interpretation of knowledge. Sashi Kumar (2011) uses a Gramscian framework to understand the operational processes and effects of digital technology on knowledge production and acquisition. His argument is that digital platforms operate like conventional markets, making the knower a consumer, with a producer’s effectiveness determined by their power. As such, production of, and exposure to, knowledge agglomerates around the most powerful actors (knowers). The effect of this power-based agglomeration is the homogenisation of knowledge: rather than a horizontal demarcation of knowledge producers, we have vertical integration and monopolisation of producers. This process is exacerbated by internal promotion systems (such as Google search algorithms, or YouTube’s “Like” algorithms), and further still by globalising forces which enable knowledge producers to transcend national boundaries, time zones and localised practices.

The ethical consequences of homogenisation.

The ethical consequences of such homogenisation of knowledge production and acquisition are issues of equity and accessibility (as touched on earlier). Certain knowledge producers (and their content) will be amplified beyond their real world, organic, functional niche. As such, the homogenisation of knowledge production and acquisition means that a narrower range of people’s content will be seen, causing an unequal distribution of influence in the social and political spheres: some people have more influence in political processes than others. (Links to the rise of populist movements can be made here; there is substantial literature on, eg, the rise of Trump, Bolsonaro, and the Black Lives Matter movement. Students who take DP Global Politics will be able to make clear links to their units on social movements in politics.)

Akin Unver, in his paper Politics of Automation, Attention and Engagement, argues that digital media platforms (eg Twitter, TikTok etc) have become “political governance systems”. They allow both politicians and the electorate to bypass traditional institutional gatekeepers, such as the established media, institutions (eg Parliament), and opinion polling systems, to communicate more directly with each other. Professor Unver explains the commodification of user attention, the rentier economic model of privately owned, for-profit platforms, and the effects of content control moving beyond the traditional nation state. It is argued that this can lead to new forms of political ideology and techno-political expression such as “cyber communism”, “networked feudalism”, and “Authoritarianism 2.0 and 3.0”. Nonetheless, he argues that digital platforms, imperfect democratic spaces though they are, are playing the role of “saving democracy” in the new digital age. Their primary advantage over older forms of democracy is that they give users ‘sovereignty’ over their data and political voice.

Privacy - Autonomy - Control

There are well known, and widely discussed, concerns over the potential threats to the right to privacy raised by digital technology. The sharing of various forms of data with corporations and governments is an obvious consequence of digitalisation. However, similar privacy concerns also apply to earlier forms of technology: the invention of the camera in the 19th century, for example, meant that people’s public behaviours could now be permanently recorded. As the discussions regarding privacy are well rehearsed elsewhere, I won’t focus on them here; rather, I will look at the effects of technology on individual autonomy, and on social control.

Most technology shapes the behaviours that individuals undertake, and this is especially true in the workplace. It is probably best exemplified by the advent of the production line during the 19th century industrial revolution, in which people were often employed to carry out a small range of tasks with little or no latitude for variation. The increase in control over our environment afforded by technology somewhat paradoxically leads to a decrease in our freedom of choice of behaviours (eg hunter gatherers used a fire for warmth and protection at night, but were then tied to staying near the fire as it provided so much more warmth and protection than any other resource available at the time). As such, it can be argued that technology decreases our physical autonomy.

The ethics of the production line.

We can develop the physical autonomy argument further when we consider the control afforded by technology. The production line controls the pace of the individual’s work; some corporations require their workers to wear tracking devices so that they know where they are at all times. The examples are myriad. However, for our theory of knowledge we need to consider whether such control can also be applied to knowledge and thinking. Here we can refer back to the section on homogenisation and the selection of messages in digital media earlier in this article. We can apply the same models to earlier technologies: book publishers, religious organisations, even theatre producers have all been powerful gatekeepers during earlier stages of history. The important emphasis of the argument here is that technology amplifies their control.

Biotechnology and autonomy.

Further, the ethics and technology issues pertaining to autonomy and control are exacerbated when we consider medical technology, particularly recent developments in biotechnologies. Some companies have been making employees wear fitness trackers for the past few years. The reasons given are the health benefits, but this also undoubtedly encroaches on the individual’s privacy, and potentially their autonomy. There is now increased use of internal biological monitoring systems (eg monitoring cardiovascular metrics); again, the positive ethical benefits to health come with concerns over possible infringements of autonomy and control.

A further concern for autonomy and control arises from the increasing use of artificial intelligence and machine learning (see the earlier post on AI). Artificial intelligence is increasingly performing processes and functions that were previously within human control. Whilst the initial coding of an AI may determine how it performs those functions, machine learning enables the AI to learn, and adapt, its processes. As such, many argue that AI is reducing our autonomy over the functions that we give it (this is, inherently, the very point of AI). The ethical issues arising from AI include all of the above: autonomy, control, privacy, productivity, democratic participation, and access.

Conclusions.

This article is an overview of some of the ethical issues potentially arising from the application of technology, written for ToK students studying the Knowledge & Technology optional unit. It does not intend to look in depth at particular ethical issues; that would be better achieved by students who have chosen to investigate those issues further. In summary, we have broadly described how, whilst technology may bring positive ethical outcomes, the nature of its application can also lead to some ethical concerns.

Daniel, Lisbon, Feb 2023

Other posts on technology in this series:

We need to talk about Pune India.

What is Technology?

How does Technology change the pursuit of knowledge?

Does my thermostat have feelings? (Artificial Intelligence)

Bibliography

  • Coon, Carleton S. “Technology and Human Relations.” Proceedings of the American Academy of Arts and Sciences, vol. 75, no. 1, 1942, pp. 23–27. JSTOR, https://doi.org/10.2307/20023443. Accessed 7 Feb. 2023.

  • Friedman, Thomas L. The World Is Flat: A Brief History of the Twenty-First Century. New York, Farrar, Straus and Giroux, 2005.

  • Hofmann, Bjørn Morten. “Too Much Technology.” BMJ: British Medical Journal, vol. 350, 2015. JSTOR, https://www.jstor.org/stable/26518356. Accessed 7 Feb. 2023.

  • Jandrić, Petar, and Sarah Hayes. “Technological Unemployment and Its Educational Discontents.” The Digital Age and Its Discontents: Critical Reflections in Education, edited by Matteo Stocchetti, Helsinki University Press, 2020, pp. 161–82. JSTOR, https://doi.org/10.2307/j.ctv16c9hdw.13. Accessed 7 Feb. 2023.

  • Kass, Leon R. “Ageless Bodies, Happy Souls: Biotechnology and the Pursuit of Perfection.” The New Atlantis, no. 1, 2003, pp. 9–28. JSTOR, http://www.jstor.org/stable/43152849. Accessed 8 Feb. 2023.

  • Kumar, Sashi. “Hegemony in Contemporary Culture and Media and the Need for a Counter Initiative.” Economic and Political Weekly, vol. 46, no. 51, 2011, pp. 38–43. JSTOR, http://www.jstor.org/stable/23065547. Accessed 7 Feb. 2023.

  • Schütz, Alfred. “The Well-Informed Citizen: An Essay on the Social Distribution of Knowledge.” Social Research, vol. 13, no. 4, 1946, pp. 463–78. JSTOR, http://www.jstor.org/stable/40958880. Accessed 3 Feb. 2023.

  • Shepard, Jon M. “Technology, Division of Labor, and Alienation.” The Pacific Sociological Review, vol. 16, no. 1, 1973, pp. 61–88. JSTOR, https://doi.org/10.2307/1388654. Accessed 7 Feb. 2023.

  • Singh, Niharika. “Stanford Researchers Developed a Machine Learning Model Called POPDx to Predict Rare Diseases, Including Diseases That Aren’t Present in the Training Data.” MarkTechPost, 6 Feb. 2023, www.marktechpost.com/2023/02/06/stanford-researchers-developed-a-machine-learning-model-called-popdx-to-predict-rare-diseases-including-diseases-that-arent-present-in-the-training-data/. Accessed 7 Feb. 2023.

  • Singer, Peter A. “Recent Advances: Medical Ethics.” BMJ: British Medical Journal, vol. 321, no. 7256, 2000, pp. 282–85. JSTOR, http://www.jstor.org/stable/25225223. Accessed 7 Feb. 2023.

  • Unver, H. Akin. “Digital Challenges to Democracy: Politics of Automation, Attention, and Engagement.” Journal of International Affairs, vol. 71, no. 1, 2017, pp. 127–46. JSTOR, https://www.jstor.org/stable/26494368. Accessed 8 Feb. 2023.

  • “List of countries by number of Internet users.” Wikipedia, https://en.wikipedia.org/wiki/List_of_countries_by_number_of_Internet_users. Accessed 3 February 2023.
