People can hardly recognize AI-generated media: new results from AI research
AI-generated images, texts and audio files are so convincing that people can no longer distinguish them from human-made content. This is the result of an online survey with around 3,000 participants from Germany, China and the USA. The study was conducted by CISPA faculty members Dr. Lea Schönherr and Prof. Dr. Thorsten Holz and presented at the 45th IEEE Symposium on Security and Privacy in San Francisco. It was carried out in cooperation with Ruhr University Bochum, Leibniz University Hannover and TU Berlin.
The rapid development of artificial intelligence in recent years makes it possible to generate images, texts and audio files in large quantities with just a few clicks. However, this development carries risks, especially with regard to political opinion-forming. With important elections coming up this year, such as the EU parliamentary elections or the presidential election in the USA, there is a risk that AI-generated media will be used for political manipulation. This poses a great danger to democracy.
The results of the study are striking: people find it difficult to distinguish real media from AI-generated media. This applies to all media types examined, namely texts, audio and image files. Surprisingly, only a few factors help explain whether people are better at recognizing AI-generated media: there are hardly any significant differences across age groups, educational backgrounds, political attitudes or levels of media literacy.
For the study, a quantitative online survey was carried out in Germany, China and the USA. The participants were randomly assigned to one of three media categories (text, image or audio) and were shown both real and AI-generated media. In addition, socio-biographical data, knowledge about AI-generated media and other factors were collected. A total of 2,609 data records were included in the evaluation.
The study provides important insights for cybersecurity research. There is a risk that AI-generated texts and audio files will be used for social engineering attacks, and developing defense mechanisms for such attack scenarios is an important task for the future. We also need to better understand how people can distinguish AI-generated media in the first place. A planned laboratory study will ask participants directly whether they can recognize that a given piece of media is generated. In addition, technical methods for automated fact-checking are to be developed.
The complete scientific publication of the study can be viewed via the link provided.
Table: Overview of the study results
| Media type | Classified as real | Classified as AI-generated |
| ---------- | ------------------ | -------------------------- |
| Text       | 65%                | 35%                        |
| Image      | 52%                | 48%                        |
| Audio      | 60%                | 40%                        |
The study shows that the majority of participants classified AI-generated media as human-made, regardless of the type of medium.
The results of this study provide valuable knowledge about the perception and recognition of AI-generated media and offer important starting points for further research in this area. It remains to be seen how artificial intelligence will develop in the future and what consequences this will have for society.