
Deepfakes: Weighing the Risks and the Benefits


By: Darren Lee


As reported by Maria Negreiro, a researcher at the European Parliamentary Research Service, an astounding 8 million deepfakes were projected to be shared in 2025, an increase of 7.5 million over 2023, and as much as 90% of online content may be synthetically generated by 2026 (Negreiro, 2025). Deepfakes, realistic but fabricated videos, images, or audio files created with artificial intelligence, have become a major issue for politics and society as a whole. They threaten democratic processes, undermine institutional credibility, and blur the line between reality and fabrication. This mirrors the world of Philip K. Dick’s Do Androids Dream of Electric Sheep?, where technology has advanced so far that the line between man and machine is blurred (Dick, 1968/2017). The issue matters because deepfakes not only undermine the reliability of information but also erode trust in governmental institutions, and their potential exploitation by foreign actors for political ends exacerbates the danger. This report will therefore explore the risks deepfakes pose to modern politics, including their effects on electoral outcomes, institutional trust, and foreign interference in domestic affairs, before turning to emerging scholarly perspectives on their benefits.

In the modern day, deepfakes have become a powerful tool for manipulating electoral outcomes. Todd Helmus, an adjunct professor at Johns Hopkins University specializing in national security, states that deepfakes can influence elections by enabling the spread of convincingly real false content crafted to paint candidates in a negative light (Helmus, 2022). He contends that deepfakes may be used to politically sabotage rivals and sway entire elections (Helmus, 2022). Zia Muhammad, a professor of cybersecurity at the University of Jamestown, elaborates on this idea, stating that deepfakes have turbocharged political sabotage: because a deepfake can be created within seconds, disinformation spreads faster, and sabotage spreads with it (Muhammad et al., 2024). Swapneel Mehta, a PhD graduate of New York University, and his colleagues present a complementary view, arguing that deepfakes can sway public opinion during elections by bolstering the credibility of certain narratives (Mehta et al., 2024). Given the speed and realism with which synthetic media can be produced, they contend, candidates can use deepfakes to push flattering videos of themselves, such as a fabricated clip of a candidate flawlessly delivering an inspiring speech (Mehta et al., 2024). Deepfakes have thus become a political tool for either self-promotion or the sabotage of an opponent. Taken together, these viewpoints demonstrate how deepfakes can shape electoral outcomes and how technology is reshaping the political sphere.

Additionally, deepfakes enable foreign states to spread propaganda more effectively and to interfere in domestic politics through synthetic media designed to destabilize entire nations. Nisha Rawindaran, a professor of cybersecurity at the University of South Wales, asserts that deepfakes can be weaponized as propaganda by foreign states seeking to politically destabilize a nation, inserting foreign actors directly into domestic politics (Rawindaran, 2024). This is further supported by Prakash L. Kharvi, a PhD student at Marymount University who published his findings in a peer-reviewed journal (Kharvi, 2024). Kharvi argues that deepfakes allow foreign state actors to create highly believable content serving their interests, citing an incident in March 2022 in which a Ukrainian news website was hacked to display a fabricated video of President Zelensky calling on his people to surrender to Russian forces (Kharvi, 2024). Such propaganda has also become increasingly hard to detect. Nad’a Kovalčíková, a senior analyst at the EU Institute for Security Studies, an organization focused on analyzing cross-border security challenges, notes that as artificial intelligence advances, the distinction between real and fake blurs, complicating efforts to detect foreign manipulation of domestic affairs (Kovalčíková et al., 2024). Collectively, these views show that as deepfake technology advances, foreign actors gain ever more opportunities to influence domestic politics; the decay of the line between real and fake is precisely what enables them to destabilize targeted nations.

Deepfakes have also eroded public trust by fostering widespread uncertainty and undermining the public’s ability to distinguish real from fake. Maria Pawelec, a research associate specializing in cyberthreats at the University of Tübingen, argues that the mere existence of deepfakes undermines public trust in institutions: aware that fabrication is possible, the public may dismiss even genuine content as false (Pawelec, 2022). She adds that because extremist groups can use deepfakes to put misinformation in the mouths of trusted authoritative figures, public trust in government and its institutions has eroded (Pawelec, 2022). Pawelec calls one consequence of this erosion the liar’s dividend: politicians, public figures, and others caught in genuine wrongdoing can dismiss real evidence as a deepfake and escape accountability (Pawelec, 2022). Josh A. Goldstein and Andrew Lohn, both research fellows at Georgetown’s Center for Security and Emerging Technology, a nonpartisan research organization that helps decision makers understand the complexities and challenges of new technologies, delve further into this point (Goldstein & Lohn, 2023). They state that the liar’s dividend threatens to upend democracy’s foundations because it erodes trust in public figures and governmental institutions, and that trust is core to democracy: citizens need access to facts and truth to make informed decisions (Goldstein & Lohn, 2023). All in all, on this view, deepfake technology has decayed institutional trust in ways that could upend the foundations of democracy.

While deepfakes are often portrayed as harmful, several scholars argue that the technology offers meaningful opportunities for social benefit when used responsibly. Hanan Sharif, a lecturer at Lahore Garrison University, and his colleagues assert that deepfakes can help educate students through virtual tutors that are cheaper than in-person instruction while providing a comparable experience, and that they can be used to create educational videos like those on Khan Academy and similar platforms (Sharif et al., 2025). Abhinav Dhall, a professor at the Indian Institute of Technology Ropar, and colleagues add that deepfake techniques can be used to translate spoken words into sign language (Dhall et al., 2025). Karima Ghediri, a lecturer at the École Nationale Supérieure de Journalisme et des Sciences de l’Information, offers a different framing: the risks of deepfakes lie entirely in the intent of the user, so the technology should be regulated to ensure its responsible use (Ghediri, 2024). Together, these authors reveal that the value of deepfakes depends not on the technology itself, but on how individuals choose to use it.

Deepfakes’ effect on modern politics is significant; however, scholars have argued that, if used responsibly and regulated, deepfakes may bring societal benefits in fields like education. Despite this, the risks remain: deepfakes can manipulate electoral outcomes, allow foreign actors to intrude on domestic politics, and erode institutional trust, potentially even upending democracy. It remains to be seen what course of action will serve society best on the issue of deepfakes.


References

Dhall, A., Khan, M. R., Tariq, U., Colon, C. I., Nashash, H. A., & Naeem, S. (2025). Generation and Detection of Sign Language Deepfakes: A Linguistic and Visual Analysis. IEEE Transactions on Computational Social Systems, 1–11. 

Dick, P. K. (2017). Do Androids Dream of Electric Sheep? Del Rey, An Imprint Of Random House. (Original work published 1968)

Ghediri, K. (2024). Countering the negative impacts of deepfake technology: Approaches for effective combat. International Journal of Economic Perspectives, 18(12), 2871–2890.

Goldstein, J. A., & Lohn, A. (2023). Deepfakes, Elections, and Shrinking the Liar’s Dividend. Brennan Center for Justice.

Helmus, T. C. (2022). Artificial Intelligence, Deepfakes, and Disinformation: A Primer. RAND Corporation.

Kharvi, P. L. (2024). Understanding the Impact of AI-Generated Deepfakes on Public Opinion, Political Discourse, and Personal Security in Social Media. IEEE Security & Privacy, 22, 115–122.

Kovalčíková, N., Filipova, R., Hogeveen, B., Karásková, I., Pawlak, P., & Salvi, A. (2024). Introduction: Critical Domains of Foreign Interference (pp. 2–4). European Union Institute for Security Studies (EUISS).

Mehta, S., Kothari, N., Ranka, H., Pariawala, V., & Surana, M. (2024). Examining the Implications of Deepfakes for Election Integrity.

Negreiro, M. (2025). European Parliament briefing: Children and Deepfakes (pp. 1–8). EPRS | European Parliamentary Research Service.

Pawelec, M. (2022). Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions. Digital Society, 1(2).

Rawindaran, N. (2024). Redefining Reality in Political Propaganda: Exploring the Impact of Superimposed Deepfakes in Misinformation Campaigns. Springer, 47–61.

Sharif, H., Atif, A., & Nagra, A. A. (2025). Deepfake-Style AI Tutors in Higher Education: A Mixed-Methods Review and Governance Framework for Sustainable Digital Education. Sustainability, 17(21), 9793.


3.1.2026



