Mohamed Suliman - Author at 51łÔąĎ

Will Text-To-Image AI Be the Next Tool of Disinformation?
Tue, 20 Dec 2022

One of the most recent advancements in artificial intelligence is text-to-image generation. These systems transform written text into highly realistic images. Recent breakthroughs in how computers understand human language have substantially improved the quality of their output. While these systems push the boundaries of fields such as art, they also pose a severe threat to our information ecosystem: they are cheap, readily accessible tools for creating fabricated photos that can be used to mislead the public.

Several text-to-image applications are now commonly known in the industry. Some of these applications are freely available to the public, while others apply an invitation-only access policy.

I tested these systems with different prompts to understand their potential to create disinformation content for current and historical events. Below are samples of the images I created. 

AI-generated image based on the command: Israeli soldiers storm the Al-Aqsa Mosque


AI-generated image based on the command: Military trucks on San Francisco’s Golden Gate Bridge

Four AI-generated images based on the command: A fire in New York’s Times Square

These models also have the potential to distort our perception of history and could be used to fuel conspiracy theories. For example, here is a fabricated photo of Abraham Lincoln.

AI-generated image based on the command: Abraham Lincoln with his black wife

AI-generated image based on the command: Karl Marx in front of the White House

A problem of trust

These AI-generated photos not only produce erroneous beliefs about actual past events; they also threaten the confidence an informed public should have in our information ecosystem. When compelling but fabricated pictures of events go viral, real ones lose value.

In today’s journalistic culture, photos have become essential elements of news stories. Journalists need compelling material to engage readers and persuade them that what they present is news. The images they use tell part of the story and may therefore produce an illusory coherence that the facts themselves do not support.

Given the already well-documented abuse of Photoshop in advertising and promotion, it is easy to imagine an unscrupulous actor circulating an AI-fabricated image on social media and blogs, accompanied by a simple caption to suggest an entirely fictional story. For instance, a journalist who wants to make the public believe in the discredited RussiaGate thesis of the “pee tape” contained in the Steele dossier might show a picture of a pee-stained bed in a Moscow hotel. Together, the caption and the photo can form a visually credible message that is pure disinformation. 

What distinguishes photos generated by text-to-image systems from those produced with tools such as Photoshop or other AI systems is that the technological barrier is much lower. Any layperson who can read and write and has access to a computer and the internet can create these images, even with no design or graphics skills. Other tools require a creator with specialized skills, which may include coding. Some text-to-image photos generated with today’s tools, such as those displayed above, may still lack the finesse to avoid being easily detected as forgeries, but one can expect the technology to improve over time.

Possible solutions to this problem

To mitigate the risk of disinformation, creators of these systems could apply multiple solutions, such as preventing the generation of photos associated with known personalities, places, and events, and establishing a list of prohibited keywords and prompts. Another approach would be to organize a gradual, strategic launch of the app, testing it with a limited audience and assessing the possible dangers through user feedback. Some vendors have already established terms-of-use policies that explicitly prohibit misuse of the tool. While I was conducting my research, DALL·E 2 even suspended my account when I attempted prompts that could generate photos for disinformation purposes. But in other cases, it did not. The system has now become available to anyone after the waiting-list barrier was removed.
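The keyword- and prompt-filtering approach described above can be sketched as a minimal pre-generation check. Everything here, from the blocklist entries to the function name, is an illustrative assumption, not any vendor’s actual implementation; real systems would combine far richer lists with trained content classifiers.

```python
# Minimal sketch of a pre-generation prompt filter.
# The blocklist below is purely illustrative.
BLOCKED_TERMS = {
    "abraham lincoln",   # known personality
    "white house",       # known place
    "al-aqsa mosque",    # known place
}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that mention any blocked personality, place or event."""
    normalized = prompt.lower()
    return not any(term in normalized for term in BLOCKED_TERMS)

print(is_prompt_allowed("A watercolor landscape of rolling hills"))  # True
print(is_prompt_allowed("Karl Marx in front of the White House"))    # False
```

A simple substring check like this is easy to evade with misspellings or paraphrases, which is why the article’s other suggestions, such as staged rollouts and human review of feedback, matter alongside it.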

In the absence of proper self-regulation, governments should start to intervene and consider the threat these systems present to the public. One can easily imagine unscrupulous politicians using these photos during an election campaign to attack an opponent or alter the public’s perception of ongoing events. There is always a tricky balance between enjoying the advancement of such a breakthrough in artificial intelligence, with its promise of radically facilitating how we create artwork, and the risk of recklessly putting such a powerful disinformation tool in the hands of those who abuse it. Media literacy projects should update their educational materials to teach citizens about detecting fabricated images. Fact-checking organizations could also play an important role in detecting and exposing these fake images to the public.

The views expressed in this article are the author’s own and do not necessarily reflect 51łÔąĎ’s editorial policy.

The Right to Fair Recollection
Thu, 02 Jun 2022

One year from now, most of us will not want painful memories of burnt buildings and lines of refugees from the war in Ukraine resurfacing in our newsfeeds. Some, by contrast, might be curious to revisit these memories and compare how the situation has changed and evolved. The tricky part of this recollection process is that the final decision will be made on our behalf by opaque algorithms that could be tuned to increase engagement and profit, not to foster a healthy relationship with our past.

In the past few years, several social media platforms and web applications have started to build features that let the users interact with their online memories, tapping into the power of artificial intelligence algorithms to automate the whole process. These authoritative algorithms need to be challenged and subjected to public oversight.

How Social Media Platforms Handle Our Memories

Meta, formerly known as Facebook, has two features through which users can access their past memories on its platforms. “Year in Review” collects the important events of the year in one album. This feature drew criticism after it displayed a photo of a deceased daughter to her father, leading Facebook to issue an apology. The other feature is “On This Day,” which, as the name suggests, automatically selects a memory from the past and presents it to the user.

Timehop is an application, introduced in 2017, that automates memory recollection across social media platforms. According to the application’s website, it has been downloaded by 20 million users. The application offers its subscribers the right to delete their personal data and to know what information has been collected, but it doesn’t allow them to understand how its algorithm functions.

Other platforms, such as Amazon and YouTube, seem merely interested in giving us a static view of our past interactions. For example, Amazon’s “buy again” feature presents past purchases directly, without any alteration or automation.

The Pre-automated Memory Recollection

In the world of pre-automated memory recollection, we encounter our past experiences in a natural way: when we stumble upon old pictures and videos, engage in random conversations with family and friends, read old personal notes, celebrate anniversaries, pass by buildings we used to live, study or work in, or listen to songs associated with pleasant experiences or sad memories of breakups. These experiences are deeply interwoven into the fabric of reality, and we handle them in a fundamentally humane way when they are evoked. Platforms such as Facebook and Timehop now act as intermediaries between ourselves and our past, continually shaping how we think and reason about our genuinely lived experiences, and hence how we live our lives.

Researchers who studied the automation of memories by social media found that the metrics used to quantify memory recollections, such as “likes,” could also be exploited by the platforms to increase engagement. They could also become a source of competition and comparison between users. All this clearly shows the extent to which platform creators are not transparent about the real goal of the memory feature, and the damage they cause to our connection with the past as they monetize our engagement.

The Right to Fair Recollection

Lawmakers should work to introduce the right to fair recollection. That means changing the current paradigm of memory creation: rather than having algorithm designers surreptitiously dictate how the system works, users should be the ones who manage the whole process. This would be achieved by allowing users to stop the feature, to block memories associated with certain persons, events and times, and to opt out of the categorization of memories. Lawmakers should also ensure that users can at any time pull out and merge their online memories, distributed across applications and platforms, to form unified access to their past, and can access and tweak the factors the algorithms depend on for selecting memories. This approach would also give each of us a unique, individual experience of past memories instead of the current limited one-size-fits-all model.
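The user controls proposed above, such as pausing the feature and blocking memories tied to particular people or dates, could be sketched as a simple preference filter. All names and fields here are hypothetical illustrations, not the API of any real platform.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of user-managed recollection preferences.
@dataclass
class RecollectionPrefs:
    enabled: bool = True                              # the user can stop the feature entirely
    blocked_people: set = field(default_factory=set)  # names the user never wants resurfaced
    blocked_dates: set = field(default_factory=set)   # dates the user never wants resurfaced

def select_memories(memories, prefs):
    """Return only the memories the user's own rules allow."""
    if not prefs.enabled:
        return []
    return [
        m for m in memories
        if m["date"] not in prefs.blocked_dates
        and not set(m["people"]) & prefs.blocked_people
    ]

prefs = RecollectionPrefs(blocked_people={"ex-partner"})
memories = [
    {"date": date(2021, 6, 1), "people": ["ex-partner"], "caption": "Trip"},
    {"date": date(2020, 3, 14), "people": ["family"], "caption": "Birthday"},
]
print(select_memories(memories, prefs))  # only the "Birthday" memory survives
```

The point of the sketch is that the filtering rules live in a structure the user owns and edits, rather than inside an opaque ranking model controlled by the platform.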

Currently, some social media platforms and applications already give their users part of this right. One application lets its users choose both the photo and the return date of the recollection, without any intervention from the system. Snapchat automates the resurfacing of photos shared on the same day in past years, giving users the option to choose from the photos shared on that day. This model gives the application partial control of the recollection process by listing which photos users may choose from; here, users could be seen as co-creators. Another platform allows users to block memories associated with certain dates and persons, as well as to choose how often they would like to see notifications about memories. But the algorithms that run the whole process and select particular memories over others remain a black box.

It’s legitimate to argue that shifting full control over the recollection process to users will be complex for many, especially those who are not tech-savvy, but this should change over time as the model becomes widely accepted and shared in society.

Our perception of the past contributes in a major way to our entire makeup. Having the right to protect ourselves against the downside of the automation and commercialization of our past experiences is definitely a step worth taking and it should be defended by everyone. 

(This article was edited by Senior Editor Francesca Julia Zucchelli.)

The views expressed in this article are the author’s own and do not necessarily reflect 51łÔąĎ’s editorial policy.
