Internationales Zentrum für Ethik in den Wissenschaften (IZEW)

Non-malicious image sharing on social media platforms

Attachment


In this draft, I will give examples of AI-generated synthetic content and differentiate between malicious and non-malicious engagement in practices of generating and sharing such content on social media. On the one hand, this responds to a recent paper focusing on spam and scam intentions in sharing such content (DiResta and Goldstein 2024a). On the other hand, it introduces the reader to typical genAI imagery shared within certain groups, in the hope of illustrating the ‘vibe’ of explorative and playful engagement in those groups and contrasting it against malicious practices.

Recent studies, private companies’ oversight boards, and policies dealing with phenomena of generating and sharing synthetic AI image content in social networks have focused on the process of labeling AI-generated images (Bickert 2024), malicious intentions in sharing synthetic images in social networks (DiResta and Goldstein 2024a), video deepfake content spreading misinformation (Meta Oversight Board 2023), perceived brand authenticity (Brüns and Meißner 2024), societal risks of nefarious applications of genAI (Ferrara 2024), filtering fake news (Steinebach 2023), producing datasets of, e.g., synthetic faces shared repeatedly on social networks (Boato et al. 2022), advantages and disadvantages of deepfake technology from before genAI became openly accessible (Whittaker et al. 2020), and responsibility (Chen, Fu, and Lyu 2023). A systematic review of empirical communication research on user-generated content (UGC) exists for pre-genAI times (Naab and Sehl 2017) and has recently been outlined with genAI UGC in view (Hua et al. 2024). Regarding the origins of generated images, research focuses on cases of reproduction of singular images from training data (Nasr et al. 2023; Carlini et al. 2023; Somepalli et al. 2022), concerning, e.g., privacy and intellectual property issues, or on the origins of generated images in terms of what kind of technical object they stem from (Wang et al. 2023). Meme theories (Hawthorne 2007) and generative art theories (Galanter 2016) have been formulated within the field of the arts, and AI-generated synthetic images have been recognized as a research object within the media sciences and specifically within the image sciences (Wilde, Lemmes, and Sachs-Hombach 2023).

Recurring image content and sharing motivations in Facebook groups dedicated to sharing synthetic images

This section includes examples of practices of generating synthetic images and sharing such content on social media platforms such as Facebook, Instagram, and Reddit, as well as poll results and interview statements from users who engage in such practices. These are meant to initiate the discussion and are not to be interpreted as quantitatively representative. The approach is in line with DiResta and Goldstein’s (2024a) remark that “other work could take an ethnographic approach and interview individuals behind these Facebook pages to better understand their motivations and views”, although I did not focus on non-AI-related groups that have been taken over or infiltrated by malicious agents, but chose to collect information from groups in which genAI UGC is typically and organically shared. The results suggest that typical users in the examined groups engage with text-to-image models in non-malicious, playful ways, have no malicious motivations in generating or sharing content, and are interested in entertainment, exploration, and sharing knowledge. However, some of their content is later used by other agents with malicious intentions and for inorganic posting.

‘Cursed AI’

The Facebook group ‘Cursed AI’ is dedicated to sharing surprising, shocking, or eerie-looking AI-generated synthetic content and has 837,380 members (April 16th, 2024). The group description reads:

“Beware, these creations may haunt your dreams and unravel your sanity. Step into the eerie world of AI-generated cursed art, where machines possess powers to create twisted and terrifying masterpieces. Join our community of art lovers with a taste for the strange and share your twisted creations. These disturbingly beautiful images crafted by AI will leave you questioning the very nature of technology and its place in our world. Enter at your own risk.” (Cursed AI 2024)

Table 1.2 lists image content from a random 1h period sorted by categories[i]; a more detailed description of all 74 posts can be found in the attachments.

Recurring image content in ‘Cursed AI’ FB group (postings in random 1h time span: 74)

Pop cultural references to movies, video games, literature: 20
Horrorlike content: 9
Surreal content: 8
Everyday situations: 6
Other: 6
Non-edible food/drinks: 5
Sports: 5
Celebrity content: 4
Bizarre/icky content: 4
Technology: 4
Jokes, e.g., in the form of comic panels: 3
Political content: 2
Religious content: 2
Portraits: 2
Instructions: 1
Attempts to create sexual content: 1
Attempts to create violent content: 1

Table 1.2. Number of shared images with recurring image content in a random 1h period in the FB group ‘Cursed AI’ on April 14, 2024, 1-2 a.m. CET, sorted by categories; a more detailed description of all posts can be found in the attachments.

The randomly selected one-hour period appears representative of the group’s overall activity.[ii] However, references to specific internet memes, which typically come up frequently, were missing in this period. Additionally, Table 1.2 does not capture the fact that certain trends sometimes evolve within the group and can last anywhere from several days to several weeks. For example, the ‘genre’ of ‘instructions’ was particularly popular for a while, and one image in the random sample was an instructional graphic. The examples provided further below in Figure 1.3 include additional instances of these two categories (memes and trends), along with other content from the categories listed in Table 1.2. These examples illustrate posters’ sharing motivations and highlight various aspects of human-technology relations resulting from practices of generating, sharing, and discussing AI-generated synthetic images.

Sharing motivations

From a two-year engagement with the group, the motivations shown in Table 1.3 have been hypothesized: In most cases, the purpose of sharing synthetic content seems to consist of seeking entertainment and/or fun by enjoying the generated content, celebrating AI weirdness, experimenting socially with the epistemic lifeworlds of other internet users, or engaging socially with other users in other ways. In some cases, the purpose seems to be actual information sharing, such as attempts to illustrate prompt engineering or model differences. For example, an offline Stable Diffusion model installed on a home device will generate different image content than an online version of the same model or than other models and applications such as DALL-E or Midjourney. Moreover, increasing engagement may be a personal or professional motivation. To avoid misrepresenting posters’ motivations, a poll was posted to the group in which users could self-declare their posting purposes.

Potential sharing motivations of users posting synthetic content without malicious (spam/scam) intent | Non-representative* survey answer percentage distribution (n=33, multiple answers possible)

Entertainment/fun
  • Through content: 39%
  • Through weird properties of models: 30%
  • Through engagement with other users: 8%
  • Through social experimenting: 4%

Information sharing
  • Prompt engineering
    • specific style creation: 4%
    • specific content creation: 1%
    • general prompt knowledge: 1%
  • Model/application knowledge: 1%
  • General AI literacy: 0%

Increasing engagement
  • For money: 4%
  • For clout/likes: 0%
  • For personal/professional contact: 0%

Other: options added by users
  • Cheese: 4%
  • Open polls are fun: 4%

Added in comments:

  • “Making the wokey admins cry”
  • “I am not quite sure how to phrase this, but part of the fun is pure amazement at how it can pick up certain styles. For example, Shaw Brothers movies have a certain look to them. I can’t put my finger on or describe exactly what that look is. But if I ask for something ‘In the style of a Shaw Brothers film’ the AI nails that look. It’s like voodoo and it fascinates me.”
  • “Clungus”
  • “To see AI generated images, not stuff like this”

Table 1.3. Potential sharing motivations for posting synthetic images in the FB group ‘Cursed AI’, as observed by the author and through self-declaration of 33 users in a poll posted to the group. *The poll was deleted due to a group rule violation after about 20 minutes; an external link posted in compliance with group rules did not receive any engagement. Moreover, the items of the poll were not randomized.

From public personal conversations (i.e., comments) in which users refer to the question of their sharing motivations:

“I am not quite sure how to phrase this, but to me part of the fun is pure amazement at how it can pick up certain styles. For example, Shaw Brothers movies have a certain look to them. I can’t put my finger on or describe it exactly what that look is. But if I ask for something ‘In the style of a Shaw Brothers film’, the AI nails that look. It’s like voodoo and it fascinates me.” (Ron Strong, FB comment, April 24, 2024)

‘AI Art Universe’

The Facebook group ‘AI Art Universe’ is dedicated to AI, design, and art and has 601,184 members (April 14th, 2024). Its group description reads:

“[…] We are dedicated to fostering a supportive and inclusive environment for those interested in the intersection of AI, design and art.  If you share a passion for using AI in your work, then this is the perfect place for you to connect with like-minded individuals and learn from each other. […] Let's work together to explore the exciting possibilities in this field and continue to learn and grow.” (AI Art Universe 2024)

Recurring image content in ‘AI Art Universe’ FB group (postings in random 1h time span: 22)

Technology: 4
Portraits: 4
Surreal content: 3
Mythology: 2
Food: 2
Instructions: 2
Abstract art: 1
Landscapes: 1
Pic-to-pic instead of text-to-pic (user supplying their own drawing to a pic-to-pic model): 1

Table 1.4. Image content from a random 1h period in the FB group ‘AI Art Universe’ on April 14th, 2024, 1-2 a.m. CET, sorted by categories; a more detailed description of all posts can be found in the attachments.

Typical content from groups dealing with art styles, ideas and concepts, human perception and imagination, and so on, as listed in Table 1.4, is shown in Figures 1.1 and 1.2. The ‘eggplant’ shown in Figure 1.1 may have originated on Reddit, was taken up in FB posts, and was even adapted into a LoRA model.[iii] Figure 1.2 shows synthetic image content as generated and shared by Slava Smelovsky. As discussed later, ‘Shrimp Jesus’ content at least partly evolved in the ‘AI Art Universe’ group.

<Figure 1.1 here>

Figure 1.1. Eggplant Adaption. (1) ansmo, Eggplant, 2024, AI generated. r/StableDiffusion. (2) Ralfinger, Fried Egg Style [LoRA 1.5+SDXL], 2024, Civitai. (3) Sandra Segal, The famous eggplant, 2024, AI generated. Facebook. Links to the Facebook websites in the attachments.

<Figure 1.2 here>

Figure 1.2. Several concepts and ideas explored in the Facebook group ‘AI Art Universe’. From left to right, larger images: (1) Slava Smelovsky, A key that can unlock any door, but each use randomly rearranges the rooms behind the doors, making every entry an adventure, 2024, AI generated. Facebook. (2) Slava Smelovsky, Reconstruction of Edward Hopper's “Nighthawks”, 2024, AI generated. Facebook. (3) Smaller images: Screenshots from Slava Smelovsky’s picture overview. Permission to print given by the producer. Links to the Facebook websites in the attachments.

As there is an ongoing discussion about whether AI art is art at all, and one of the repeatedly expressed arguments concerning craftsmanship in the fine arts is ‘It’s just one click on a computer, so it is not a craft at all’, it may be noteworthy that reaching quality output as depicted in Figure 1.2 requires a generation and editing process that goes through several stages and demands technical as well as art-historical knowledge. In a personal conversation, Smelovsky said:

“I use a variety of models: CLIP Vision for recognizing what is depicted in illustrations and translating it into text. MidJourney, which, in my opinion, offers the greatest possibilities in terms of composition building and exploring different artistic styles. And ChatGPT 4, which is effective in both pattern recognition and in a kind of training and analysis of what I am working on. To refine the results, I use locally installed Stable Diffusion with numerous plugins for image segmentation, styling, and finding interesting variations. Finally, I complete everything using the AI integrated into Photoshop.” (Slava Smelovsky, personal communication, April 24, 2024)

Sharing motivations

From personal communication:

“Professionally, I use synthetic content to help outline an idea, provide an initial structure / format to flesh out later, get me to 50-75%.” (Van Anderson, personal communication, April 24, 2024)

“I'd say my motivation for sharing in the first place was divided among 1) my desire to share my weird aesthetic with people, 2) my desire to light-heartedly mess with people, and 3) my lifelong desire to create something that has never been seen/heard/experienced by anyone.” (John Sargent Patterson, personal communication, April 24, 2024)[iv]

Although ‘messing light-heartedly with people’ is not defined in the statement, it might refer to enjoying or feeling amazement about other users’ belief in ‘fake’ content without any further goals, or to enjoying the non-malicious disruption of habitual epistemic schemata. Examples the author would interpret in this way can be found in Figure 1.3.

<Figure 1.3 here>

Figure 1.3. (1) Megan Gusinski, Vegan Full English Breakfast, 2023, AI generated. Facebook. (2) Chris Duran, I explained that he didn’t know his island was just a giant geode, 2023, AI generated. Facebook. (3) John Hanes, Found in another group that thinks it’s real, 2023, AI generated. Facebook. (4) Will Bess, I gave it crocs, 2023, AI generated. Comment on Facebook. (5) Lachlan, Medical textbook illustration diagram explaining how mommy and daddy create a baby, 2023, AI generated. Facebook.

‘Midjourney: Prompt Tricks’

The Facebook group ‘Midjourney: Prompt Tricks’ is dedicated to sharing and exploring prompt engineering and has 251,959 members (April 21st, 2024). Its group description reads:

“Welcome to Midjourney: Prompt Tricks, a thriving community for Midjourney enthusiasts looking to elevate their prompt game.” (Midjourney Prompt Tricks 2024)

Recurring image content in ‘Midjourney: Prompt Tricks’ FB group (postings in random 10h time span: 11)

Surreal content: 4
Illustrations: 3
Landscapes: 2
Everyday situations: 1
Food: 1

Table 1.5. Image content from a random 10h period in the FB group ‘Midjourney: Prompt Tricks’ on April 21st, 2024, 6 a.m. to 4 p.m. CET, sorted by categories; a more detailed description of all posts can be found in the attachments.

Sharing motivations

The discussion about sharing motivations within the Midjourney community is based on several personal conversations and semi-structured interviews with individual users, and on public conversations on the Midjourney Discord server (polls were not allowed). The motivation parts are highlighted by the author. Full comments are shown because they cover some noteworthy aspects going beyond the question of motivation, specifically regarding several potential bias effects resulting from the chosen surveyed communities.[v]

“The vast majority of things I generate are for reference and ideation. The only place I share things is here because most of the places where I post art are anti-AI. Which is something you may want to consider--a lot of traditional artists view AI image generation as inherently malicious, regardless of whether the imagery is problematic or not. The images that I do end up posting are exceptional in some way--exceptionally weird, funny, or something.”

“I mostly share AI generated stuff in AI related spaces (like this server) though I did share some on Instagram for a while. I personally mostly share things I like that I think would maybe cheer someone up or encourage them to explore, especially if I've been inspired by someone else’s images. I think the malicious intention people have an advantage in volume because they don’t respect the context of spaces (after all, if it’s spam or scam related, they don’t care about people being negatively impacted). However, I think there’s probably some metric issues there- volume of content doesn’t necessarily mean volume of users. That said, I would also encourage against confirmation bias. As a moderator, I can definitely confirm there are bad actors in the information side of things, […]. It’s important when gathering data to recognize the limitations of both self-reported data (no one on Discord polls, especially not active members of the Midjourney community, is likely to say “Yeah, I share content to lie to people”) and sample bias- just like the DiResta and Goldstein paper looks at a subset prone to abuse (malicious Pages were and are a thing before and besides AI) by going to active members of the Midjourney community you are looking at people who choose to participate in a particular, usually positive, pattern of behavior.”

“I have been trying my best to create high quality content to share online. I only share roughly 2% of all my images (only the best) and with the intention of always satisfying my audience. I started making images about a year and a half ago. I have built and combined a wide variety of skills that I use to preprocess prompts and post-process images. I have even gotten much better at drawing and other traditional art forms (which people assume AI artists cannot do) which I combine with my AI imagery. I can show you my IG but basically I have grown to 1069 followers without ever using bots. I know a lot of other people that are way bigger than me in the AI art scene that don’t use bots either. People can tell when you put effort in regardless of whether you use AI or not. People care about making their art as good as possible regardless of whether they use AI or not. That’s my experience anyways.”

“On office hours, I’ve heard [anonymized] talk about how most images people generate are never shared with anyone. This would translate to most people generating images only for themselves to view. I personally feel that most of the images I generate are for the purpose of entertainment and introspection. In a sense, AI allows you to have supercharged conversations with yourself. It’s like an instrument or a microphone with effects, where you externalize your inner state through action and that action is carried through channels that transform or “translate” the original externalization into a different action format, sometimes revealing details about one’s inner state that weren’t apparent in the initial action that was performed. Given the complexity of generative AI, one can use it similarly to learn more about themselves and also change oneself in the process. It can then be seen as a medium for discovering what you want to be and becoming that.”

Internet culture: Viral AI generated Synthetic Images

“The magnificent surrealism of Shrimp Jesus—or, relatedly, Crab Jesus, Watermelon Jesus, Fanta Jesus, and Spaghetti Jesus—is captivating. What is that? Why does that exist?” (DiResta and Goldstein, 2024b)

“There are AI-generated pages full of AI-deformed women breastfeeding, tiny cows, […], Jesus as a shrimp, Jesus as a collection of Fanta bottles, Jesus as sand sculpture, Jesus as a series of ramen noodles, Jesus as a shrimp mixed with Sprite bottles and ramen noodles, Jesus made of plastic bottles and posing with large-breasted AI-generated female soldiers, Jesus on a plane with AI-generated sexy flight attendants, giant golden Jesus being excavated from a river, golden helicopter Jesus, banana Jesus, coffee Jesus, goldfish Jesus, rice Jesus, any number of AI-generated female soldiers on a page called “Beautiful Military,” […] beautiful landscapes, flower arrangements, weird cakes, etc.” (Koebler 2024a).

This section discusses the example of ‘Shrimp Jesus’ and similar content that seems to attract high user engagement, and shows how such content usually starts in a random and non-malicious context filled with fascination for the peculiarities of human existence and of technical objects, only to then be taken up by agents with malicious intentions. As pointed out by DiResta and Goldstein (2024b), the “captivating, novel, and immersive imagery” of Shrimp-Jesus-like content does not only animate users to engage and share with friends but is also “appealing to spammers and scammers”, whom the authors label “innovative actors” motivated “by profit or clout (not ideology)” (ibid.).

Within the Facebook group ‘AI Art Universe’, shrimp content has been shared at least since 2022, as shown in Figure 1.4.[vi]

<Figure 1.4 here>

Figure 1.4. Shrimp Content in ‘AI Art Universe’. (1) John Sargent Patterson, Psst... Kids love shrimp, too!, 2022, AI generated. Facebook. (2) John Sargent Patterson, “Caffeinestacean”, 2023, AI generated. Facebook. (3) John Sargent Patterson, A computer drew these, 2021, AI generated, Facebook. (4) Smaller images: Screenshots from John Sargent Patterson’s picture overview. (5) Own depiction, Easter Egg Dalí Shrimp Jesus, 2024 AI generated, Facebook. Permission to print given by the producer. Links to the Facebook websites in the attachments.

A tweet that has since been deleted introduced the concept of a ‘Shrimp Christ’ on December 21st, 2020.[vii] It read: “Had a weird dream about a restaurant called It's Just Shrimps! that had a $900 thing on the menu called Shrimp Christ and whoever ordered the Shrimp Christ would get arrested.” The tweet was taken up on Reddit and shared in the subreddit r/BrandNewSentence, as shown in Figure 1.5. According to shrimp content creator John Sargent Patterson, this tweet was an inspiration to create Shrimp Jesus content. Another source of shrimp content seems to originate from a podcast (MBMBaM 367: Shrimp! Heaven! Now![viii]). All these contexts include non-malicious intentions and aim at entertainment. Shrimp Christ content was also used on the platform formerly known as Twitter as early as May 2022,[ix] most likely to increase engagement with the user’s Twitch streaming account dealing with the video game Elden Ring, in which, again, shrimps and Jesus appear.[x]

<Figure 1.5 here>

Figure 1.5. Shrimp Heaven/Christ content on Twitter, Tumblr, Reddit, and printed on products available to buy, starting in 2017. (1) thefiresontheheight, reblog of the tweet by @nizum_ningem, 2017, Tumblr. (2) diveonfire, Whoever would order Christ Jesus would get arrested, 2022, r/BrandNewSentence. (3) The Occasional Clabon, SHRIMP HEAVEN NOW! (MBMBAM animated), 2018, YouTube. (4) Screenshot of Shrimp Heaven Now products shown in a Google search for “Shrimp Heaven Now”, April 24, 2024.

Facebook releases a quarterly ‘Widely Viewed Content Report: What People See on Facebook’. While the Q3 2023 report included a generated image of a kitchen in the top 20 viewed content list, the Q4 2023 report includes a fan-made AI-generated poster for a non-existent movie (Meta Transparency Center 2023); both are shown in Figure 1.6. Although the accompanying text of the original post reads “For entertainment purposes only! Not real!”, in the comment section of this post users who at least seem to be real users (not bots) discuss whether the movie exists and inform each other that it does not. Additionally, a high percentage of comments consists of generic positive engagement with words such as fantastic, can’t wait, so cool, love it, and so on. Some of those seem to stem from bot profiles (Moore 2023; Meta 2024; Schultz 2019), some from profiles curated by real people.

<Figure 1.6 here>

Figure 1.6. Synthetic image content in the top 20 widely viewed images on Facebook in Q3 and Q4 2023. (1) Cafehailee, Stunning kitchen, 2023, Facebook page. (2) YODA BBY ABY, Movie News!!! Polar Express Prequel!!!, 2023, Facebook group. Links in attachments. Content like that shown here is typically shared in the groups discussed above.[xi] Such content is shared within those groups without malicious intentions.

Koebler (2024a) points out how engagement with spam and scam sites skyrocketed after they shifted from other content to AI-generated content, and he had observed these phenomena for several months before the current wave of ‘Shrimp Jesus’ and other highly engaging content surfaced. What seems a common denominator throughout most of the observed phenomena is that they revolve around topics known to be widely and passionately discussed at least within the U.S.-American cultural discourse, such as breastfeeding, honoring the military, and Christianity. These are then mingled with ‘cute’ content of children, children producing art with very few resources,[xii] people turning over a hundred years old and baking cakes,[xiii] or pop cultural references in nonsensical hashtags, e.g. names of celebrities, as shown in Figure 1.7, and posted in variations repeatedly, as shown in Figure 1.8. While some pages try to get users to click on external links that lead to “ad-laden spam and AI-generated sites like bestbabies.info, recycledcraftsy.com, inspiringdesigns.net, a scam site called thedivineprayer.com, sites selling dropshipped, low-quality products, and countless others” (Koebler 2024a), it is unclear what the goal of infiltrated pages is where this does not happen (yet).

<Figure 1.7 here>

Figure 1.7. Jesus Loves Me, Beautiful cabin crew [emojis] Scarlett Johansson [emojis], 2024, Facebook page. Link in attachments.

<Figure 1.8 here>

Figure 1.8. Jesus Loves Me, Images Overview, 2024, Facebook page. Link in attachments.

While in the scenarios described by Koebler, spam and scam intentions are obvious for some pages and end goals remain obscure for others, other agents observably start out with unclear intentions that are only revealed after a while. One such example is the Instagram account of the fake restaurant ‘ethos’. The account is filled with posts about physically implausible and impossible food arrangements such as croissant earrings, drug cakes, huge piles of bacon, and variations of the ‘eggplant’ introduced above; see Figure 1.9. It also hosts a website (https://www.ethosatx.net/), where the reservation button leads to another website containing just an image of a person that can be slapped with an eel following the cursor’s movement (https://eelslap.com/).

<Figure 1.9 here>

Figure 1.9. Ethos [@ethos_atx]. (n.d.). Post [https://www.instagram.com/p/C3dfkswJeXk/]: [emoji] Exciting news at Ethos Café! [emoji] Unleash your inner paleontologist and savor our new Dino Croissants. Choose your favorite dinosaur and pair it with a delightful cappuccino. A prehistoric treat for a modern indulgence! [emojis] Instagram. Retrieved April 24, 2024, from www.instagram.com/ethos_atx/.

It is unclear whether users interacting with the fake restaurant’s profile understand that it is fake, or whether they enjoy the ‘game’ and play along. While the intentions of the fake restaurant profile’s operators were obscure in the beginning, within the time span in which the author wrote this text, the operators added a merchandise store to their Instagram profile, including a sticker that reads “Unreal Flavors”.[xiv] Nonetheless, the project can be interpreted as a playful AI literacy lesson about fake content. On the other hand, one can argue that all fake content strengthens overall distrust in the pictorial as such, as well as in the media and public discourse. This is neither necessarily good nor bad, but it surely comes with a set of questions including epistemological, ontological, and ethical aspects of present and future human-technology relations, revolving around information and misinformation, the nature of technical objects and of living beings, and perceived or ascribed responsibility duties for content creators and distributors.

 


[i] The categories result from a grouping of reoccurring similar content that I identified through a Grounded Theory approach of qualitative data analysis.

[ii] The author of this paper has spent over 100 hours exploring this group from April 2023 to April 2024 and put together an image gallery of about 2000 saved images from the group shared throughout these 12 months. The anonymized image gallery can be shared upon request.

[iii] LoRA models make it possible to recreate a specific element of an image, such as a certain character, object, or style, again and again; see Rombach et al. 2022.

[iv] See Figure 1.3 for some examples of the user’s content shared.

[v] Regarding this, it is important to highlight that this paper does not claim to present empirical research. The polls and comments shown are meant to provide entry points for a discussion and are not to be interpreted as representative.

[vi] In the group ‘Cursed AI’ Shrimp Jesus Content is to be found, for example, here: https://www.facebook.com/groups/cursedaiwtf/permalink/1291090384832775; In ‘Stolen Memes’: https://www.facebook.com/groups/cursedaiwtf/permalink/1398314327443713.

[xiv] See: https://shop.fitprint.io/printify-products-v2.html?shop=5b682a1b-2c50-4414-bcb4-057377969d7b

Bickert, Monika. 2024. “Our Approach to Labeling AI-Generated Content and Manipulated Media.” Meta. https://about.fb.com/news/2024/04/metas-approach-to-labeling-ai-generated-content-and-manipulated-media/

 

Boato, Giulia, Cecilia Pasquini, Antonio L. Stefani, Sebastiano Verde, and Daniele Miorandi. 2022. “TrueFace: A Dataset for the Detection of Synthetic Face Images from Social Networks.” In 2022 IEEE International Joint Conference on Biometrics (IJCB), 1–7. https://doi.org/10.1109/IJCB54206.2022.10007988.

Brüns, Jasper David, and Martin Meißner. 2024. “Do You Create Your Content Yourself? Using Generative Artificial Intelligence for Social Media Content Creation Diminishes Perceived Brand Authenticity.” Journal of Retailing and Consumer Services 79 (July): 103790. https://doi.org/10.1016/j.jretconser.2024.103790.

Carlini, Nicholas, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. 2023. “Extracting Training Data from Diffusion Models.” arXiv. http://arxiv.org/abs/2301.13188.

Chen, Chen, Jie Fu, and Lingjuan Lyu. 2023. “A Pathway Towards Responsible AI Generated Content.” arXiv. http://arxiv.org/abs/2303.01325.

Ferrara, Emilio. 2024. “GenAI against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models.” Journal of Computational Social Science. https://doi.org/10.1007/s42001-024-00250-1.

Hawthorne, Julie. 2007. “Understanding Creativity through Memes and Schemata.” Thesis, UNSW Sydney. https://doi.org/10.26190/unsworks/17497.

Hua, Yiqing, Shuo Niu, Jie Cai, Lydia B Chilton, Hendrik Heuer, and Donghee Yvette Wohn. 2024. “Generative AI in User-Generated Content.”

Meta Oversight Board. 2023. “Altered Video of President Biden.” https://www.oversightboard.com/decision/FB-GW8BY1Y3.

Naab, Teresa K, and Annika Sehl. 2017. “Studies of User-Generated Content: A Systematic Review.” Journalism 18 (10): 1256–73. https://doi.org/10.1177/1464884916673557.

Nasr, Milad, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. 2023. “Scalable Extraction of Training Data from (Production) Language Models.” arXiv. http://arxiv.org/abs/2311.17035

Somepalli, Gowthami, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2022. “Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models.” https://doi.org/10.48550/arXiv.2212.03860.

Steinebach, Martin. 2023. “Potentials and Limits of Filter Technology for the Regulation of Hate Speech and Fake News.” In Content Regulation in the European Union. The Digital Services Act, edited by Antje von Ungern-Sternberg, 13–27. Trier Studies on Digital Law. Trier. https://doi.org/10.25353/ubtr-xxxx-3a52-23eb.

Whittaker, Lucas, Tim C. Kietzmann, Jan Kietzmann, and Amir Dabirian. 2020. “‘All Around Me Are Synthetic Faces’: The Mad World of AI-Generated Media.” IT Professional 22 (5): 90–99. https://doi.org/10.1109/MITP.2020.2985492.