May 2024: The CEO of WPP, the world's largest advertising agency, receives a video call. Voice, face, gestures – everything fits. Except that the person does not exist. A deepfake. The fraud attempt narrowly failed. A company in Hong Kong was less fortunate: in February 2024 it lost $25 million to a fake CFO video call – every participant in the video conference was AI-generated. Such attacks are known as CEO fraud, and deepfakes take them to a new level: the number of cases in North America rose by 1,740 per cent between 2022 and 2023.
This does not just affect CEOs. This affects everyone.
Traditional forensic methods provide circumstantial evidence – like fingerprints: helpful, but contestable. C2PA signatures function like a DNA match: practically unambiguous, with a collision probability of 2⁻²⁵⁶. While circumstantial evidence yields probabilities, cryptographic signatures provide near-mathematical certainty – not absolute proof, but a level of security that is computationally infeasible to break.
Table of Contents
Verification Tool
Free forensic analysis
What are deepfakes?
Definition and technology
Two Sides of the Coin
Creative use vs. dangers
Conclusion
Key takeaways
Check an image or video – now and for free
Have you found a suspicious image or video? Check it here in seconds. Our forensic tool analyses media on four levels: metadata, cryptographic signatures (C2PA), technical manipulation traces, and AI-generated artefacts.
How it works:
- Upload an image or video (or paste a YouTube/Vimeo link)
- The analysis runs automatically – no registration, no installation
- You receive an authenticity score with a detailed breakdown
How should the results be interpreted?
The authenticity score quantifies forensic indicators on a scale of 0–100%:
| Score | Meaning | Explanation |
|---|---|---|
| 90–100% | Cryptographically verified | C2PA signature validated – mathematically provable origin |
| 75–89% | No indicators | Forensic analysis unremarkable – but no absolute certainty without C2PA |
| 50–74% | Inconclusive | Mixed signals – can arise from legitimate editing |
| 35–49% | Inconsistencies | Technical anomalies detected – further checking recommended |
| 0–34% | Strong indicators | Multiple manipulation hints – high probability of forgery |
Direction: Higher score = more authentic, lower score = more suspicious. Without a C2PA signature, a maximum of ~85% is achievable.
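The mapping from score to category can be sketched as a simple threshold function. This is an illustrative reimplementation of the table above, not the tool's actual code; the function name is invented:

```python
def interpret_score(score: float) -> str:
    """Map the 0-100 authenticity score to the categories in the table above."""
    if score >= 90:
        return "Cryptographically verified"  # only reachable with a validated C2PA signature
    if score >= 75:
        return "No indicators"
    if score >= 50:
        return "Inconclusive"
    if score >= 35:
        return "Inconsistencies"
    return "Strong indicators"

print(interpret_score(82))  # forensically unremarkable, but no C2PA proof
```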
Sources: C2PA Specification · HFMF: Hierarchical Fusion (WACV 2025) · MIT JPEG Forensics · Visual Counter Turing Test (2024)
Read more: What are deepfakes? · 6 Types · Protective measures
What are deepfakes?
The term combines "Deep Learning" and "Fake". It refers to media – images, videos, audio – that have been altered or completely newly generated using AI in such a way that they appear deceptively real.
The technology behind it: GANs, Diffusion Models and Autoencoders.
Six types of synthetic media
1. Face Swap
A person's face is replaced, frame by frame, with another's. The AI learns to translate faces into a kind of "mathematical fingerprint" (the latent space). The encoder breaks both faces down into these codes; the decoder reassembles them – with Person A's face on Person B's head. This used to require hundreds of photos of both people, as with DeepFaceLab. Today, with modern tools like SimSwap, a single image is enough.
Face Swap in action: A face is seamlessly transferred to another body – Wikimedia Commons / CC BY-SA 4.0
Real vs. DeepFake: Nicolas Cage's face on Elon Musk – how Face Swap works – Wikimedia Commons / CC BY-SA 4.0
WIRED: Researcher explains Face Swap and deepfake technology
Outlook: One-shot face swapping will enable face exchange using a single reference image in the future. Current research such as GHOST 2.0 and DynamicFace focuses on temporal consistency in videos and identity preservation in extreme poses. Diffusion-based approaches promise even higher quality with lower training effort.
2. Face Reenactment (Facial Expression Transfer)
A face is animated with the facial expressions of another person – the "marionette technique". Face2Face (Thies et al., Stanford/TU Munich) extracts landmark points of the target face and transfers expressions in real-time. The technology makes it possible to put any words into a person's mouth without cloning their voice.
Shakespeare awakens: AI animates historical portrait – Wikimedia Commons / Public Domain
Pharaoh Tutankhamun: 3,300-year-old death mask awakens – Wikimedia Commons / Public Domain
Face2Face: Real-time facial expression transfer – Stanford/TU Munich (CVPR 2016)
Outlook: Audio-driven face animation combines speech synthesis with facial animation for fully synthetic video conferences. Research on neural head reenactment is improving the rendering of extreme head movements.
3. Voice Clone
AI generates speech in a person's voice. Tacotron 2 (Google) and VALL-E (Microsoft) require only a few seconds of audio material for convincing results. The technology analyses voice characteristics such as pitch, speaking rhythm, and timbre, and synthesises arbitrary texts in this voice.
Elvis is alive? AI-generated image shows the King in various stages of life – Midjourney / Wikimedia Commons / Public Domain
Prigozhin as AI art: When AI images go more viral than real photos – Wikimedia Commons / CC BY-SA 4.0
ElevenLabs: Cloning voices with a few seconds of audio
Outlook: Zero-shot voice cloning enables voice cloning without any training on the target voice. Emotional voice synthesis adds realistic emotions – joy, sorrow, anger.
4. Lip Sync (Lip Synchronisation)
Lip movements are matched to a different audio track. Wav2Lip (IIIT Hyderabad) maps phonemes to visemes (visual mouth shapes) and produces frame-accurate synchronisation. Someone "says" things that were never said – particularly dangerous in combination with voice cloning.
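The phoneme-to-viseme mapping at the heart of such systems can be illustrated with a toy lookup table. The groupings below are simplified assumptions for demonstration, not Wav2Lip's learned mapping – but they show why "p", "b", and "m" are telling: all three require fully closed lips, which makes timing errors visible.

```python
# Simplified phoneme -> viseme lookup: several phonemes share one mouth shape.
# Real systems learn this mapping from data; these groups are illustrative.
PHONEME_TO_VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_to_teeth", "v": "lip_to_teeth",
    "o": "rounded", "u": "rounded",
    "a": "open", "e": "spread", "i": "spread",
}

def to_visemes(phonemes):
    """Map a phoneme sequence to one mouth shape (viseme) per frame."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

print(to_visemes(["m", "a", "p"]))  # -> ['lips_closed', 'open', 'lips_closed']
```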
StyleGAN faces: These people do not exist – synthetic faces as a basis for Lip Sync – Wikimedia Commons / CC BY-SA 4.0
REAIM 2023: 'Real or Fake?' – Deepfake detection in a military context – Dutch Ministry of Foreign Affairs / CC BY-SA 2.0
Wav2Lip: AI perfectly synchronises lips to any audio
Outlook: High-fidelity lip sync using diffusion models is improving quality dramatically. Multilingual lip sync enables automatic dubbing across language barriers.
5. Full Body Puppetry (Full-Body Control)
A person's body movements are transferred to another – like digital marionettes. The First Order Motion Model (Siarohin et al., University of Trento) extracts keypoints from a driving video and transfers them to a static source image. This way, any person can "dance" any dance.
DigiDoug at TED2019: Real-time full-body puppetry – a human controls the digital avatar – Wikimedia Commons / CC BY-SA 4.0
AI caricature: Macron in front of protestors – even stylised images can appear deceptive – Wikimedia Commons / CC BY-SA 4.0
First Order Motion Model: Full-body animation from a single image – University of Trento
Outlook: 3D-aware body motion transfer considers depth and perspective. Neural body avatars enable photorealistic full-body deepfakes in real-time.
6. Fully Synthetic (Completely Invented)
Completely invented people, scenes, or events. StyleGAN (NVIDIA Research, Karras et al.) and Stable Diffusion generate photorealistic faces of people who never existed. This Person Does Not Exist demonstrates their persuasiveness.
Viral March 2023: Pope in a designer puffer jacket – millions thought it was real – Midjourney / Wikimedia Commons / Public Domain
January 2025: Fake image of the burning Hollywood sign went viral during the wildfires – Wikimedia Commons / Public Domain
StyleGAN: Journey through the latent space – morphing from face to face
Outlook: Video diffusion models generate entire video clips synthetically. Controllable generation allows precise control of the age, expression, and gaze direction of generated faces.
Two Sides of the Coin
Deepfakes are a tool – like a knife or a hammer. The technology itself is neutral. What matters is what it is used for.
| Area | Applications |
|---|---|
| Entertainment & Film | Digitally rejuvenating actors, posthumous appearances, lip synchronisation |
| Satire & Art | Political satire, digital art projects, creative experiments |
| Accessibility | Sign language avatars, personalised learning videos, communication aids |
| Gaming & VR | Personalised avatars, realistic NPCs, immersive experiences |
8 million deepfakes will be shared in 2025 – up from 500,000 in 2023. According to Europol, up to 90% of online content could be synthetically generated by 2026. The EU Commission estimates that 98% of all deepfakes are pornographic in nature.
Detecting Deepfakes
Perfect deepfakes are rare. Most leave traces – if you know what to look for.
Facial Edges
Look for unnatural transitions between the face and the background. Often the edges flicker, or the face hovers slightly above the body. This becomes particularly noticeable with head movements.
Blinking
Early deepfakes never blinked. Newer models are better – look out for asymmetrical blinking or a fixed gaze.
Lip Synchronicity
Do the mouth movements match the audio exactly? With voice clone overlays, minimal delays often occur – especially with "p", "b", or "m" sounds.
Shadows & Light
A single light source produces consistent shadows. In composited images, shadows sometimes point in different directions – a physical impossibility.
Eye Reflections
Eyes reflect their surroundings. In deepfakes, the reflections in the left and right eyes sometimes display different scenes.
Hair & Details
Hair is difficult to fake. Look out for unnaturally smooth contours, "melting" strands, or hair passing through objects.
What is metadata?
Metadata is "data about data" – invisible information within media files. It often reveals more about an image or video than the visible content.
| Type | What it contains | Forensic utility |
|---|---|---|
| EXIF (Exchangeable Image File Format) | Camera model, lens, aperture, ISO, exposure time, date/time, GPS coordinates | Shows what a photo was taken with and where. AI-generated images often have no EXIF data or only generic EXIF data. |
| IPTC (International Press Telecommunications Council) | Title, description, keywords, copyright, creator, contact details | Standard for news agencies. Professional photos have comprehensive IPTC data. |
| XMP (Extensible Metadata Platform) | Editing history, software used, presets, versions | Shows how an image was edited. Adobe software writes detailed XMP data. |
| ICC Profile | Colour space (sRGB, Adobe RGB, ProPhoto RGB) | Checking consistency: does the colour space match the purported recording device? |
| Thumbnail | Embedded preview | Sometimes the thumbnail shows the original image prior to manipulation! |
Example EXIF data:
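Concretely, EXIF fields might look like the following. All values here are invented for illustration, and the plausibility check is a toy sketch of the forensic idea from the table: AI-generated images often lack camera-specific fields entirely.

```python
# Illustrative EXIF fields (values invented for demonstration).
exif = {
    "Make": "Canon",
    "Model": "EOS R5",
    "LensModel": "RF 24-70mm F2.8",
    "FNumber": 2.8,
    "ISOSpeedRatings": 400,
    "ExposureTime": "1/250",
    "DateTimeOriginal": "2024:05:12 14:03:21",
}

CAMERA_FIELDS = ("Make", "Model", "LensModel", "ExposureTime")

def missing_camera_fields(exif: dict) -> list:
    """Return camera-specific fields that are absent - a weak forensic hint,
    since AI generators typically write no (or only generic) EXIF data."""
    return [f for f in CAMERA_FIELDS if f not in exif]

print(missing_camera_fields(exif))  # -> []  (all camera fields present)
print(missing_camera_fields({}))    # everything missing, as in many AI images
```

Missing EXIF data is only ever circumstantial: it can also result from legitimate metadata stripping on upload.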
Verification Tools
Reverse Image Search
Upload the image to Google Images, TinEye, or Yandex. Can you find older versions or the origin?
C2PA / Content Credentials
Some cameras and software cryptographically sign media. Tools like Content Authenticity Verify show the editing history.
C2PA – The Coming Standard
The Coalition for Content Provenance and Authenticity is establishing an open standard for cryptographic media signing. Hardware and software attest to origin, capture time, and editing history – a seamless chain of custody for digital media.
Steering Committee
Content Credentials: The CR Icon

C2PA = technical standard. Content Credentials = visible "CR" icon for users.
What does the CR icon show?
- Who created the media? (Camera, person, AI)
- Which software was used to edit it?
- Was AI used for generation?
- Which editing steps took place?
The trick: All details are cryptographically signed. If someone alters even a single pixel, the signature becomes invalid – manipulation is immediately recognisable.
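The single-pixel claim can be demonstrated with a plain cryptographic hash. C2PA signs hashes over the asset; here, plain SHA-256 stands in for the full signature scheme, and the four-byte "image" is a toy example:

```python
import hashlib

def digest(pixels: bytes) -> str:
    """SHA-256 over raw pixel data - the value a C2PA signature would cover."""
    return hashlib.sha256(pixels).hexdigest()

original = bytes([10, 20, 30, 40])  # toy "image"
tampered = bytes([10, 20, 31, 40])  # one pixel value changed by one step

# The digests differ completely, so any signature over the original
# digest is invalid for the tampered file.
print(digest(original) == digest(tampered))  # -> False
```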
Logo & Icon: Open Source · Linux Foundation
Check Now
Use our free verification tool at the beginning of the article to analyse images and videos for manipulation.
Rule of Thumb
The more important a piece of information is, the more sources you should check. A viral video without a verifiable source warrants scepticism – regardless of how authentic it appears.
Deepfake Detection Skill
Technical documentation for developers: PRNU analysis, IGH classification, DQ artefacts, semantic forensics, and LLM-augmented sensemaking.
What to do?
If you identify a deepfake
- Do not share – not even "as a warning". Every share increases its reach.
- Check the source – Where was the material first published? Are there independent confirmations?
- Report to the platform – Most platforms offer reporting functions for manipulated media.
- Document the context – Save a screenshot with the timestamp and URL.
If you are affected yourself
For deepfakes depicting you personally:
- Secure evidence – Screenshots, URLs, timestamps
- Contact the platform – Request a take-down
- Explore legal avenues – In Austria: § 78 UrhG (Right to one's own image), § 107c StGB (Cyberbullying)
- Seek support – Saferinternet.at Helpline, Rat auf Draht (147)
The Future
Generative models are evolving faster than detection methods. The strategic emphasis is therefore shifting from reactive detection to proactive authentication.
C2PA: The New Industry Standard
C2PA (Coalition for Content Provenance and Authenticity) is the technical response to AI-generated content. The standard functions like a forgery-proof digital passport: every edit is cryptographically signed and added to the history.
Who uses C2PA today?
- AI Generators: DALL-E 3, Adobe Firefly, Google Gemini – all sign automatically
- Software: Adobe Photoshop/Lightroom save editing steps cryptographically
- Hardware: Leica M11-P, Sony cameras sign immediately upon capture; Nikon and Canon are to follow
- Smartphones: Google Pixel 10 (2025/26) with native support; Samsung Galaxy to follow
- Platforms: Microsoft 365 will introduce mandatory C2PA watermarks for AI content in 2026
In practice: A "cr" icon (Content Credentials) appears on websites. A click displays the complete history: "Original from camera X, edited with software Y on date Z."
Challenge: Screenshots and social media uploads can remove metadata ("stripping"). Therefore, invisible watermarks coupled with C2PA are also being developed.
Forecast: In 3–5 years, media organisations and public authorities will consider no file trustworthy without cryptographic proof of origin.
- Democratisation of Synthesis
- Erosion of Synchronous Communication
- Probabilistic Forensics
- Cryptographic Provenance
Paradigm shift: From reactive detection to proactive authentication
Resources
For Schools & Lessons
Saferinternet.at provides teaching materials on the topic of "True or False on the Internet" – free of charge and tested in practice.
Technical Depth
The webconsulting Deepfake Detection Skill documents forensic methods in detail: PRNU analysis, IGH classification, semantic forensics, and more.
Further Links
| Resource | Description |
|---|---|
| Content Authenticity Initiative | Adobe-led initiative for media provenance |
| C2PA Specification | Technical standard for Content Credentials |
| Saferinternet.at | Austrian platform for a safe internet |
| DeepfakeBench | Academic benchmark for deepfake detection |
C2PA Test Files
Download these official test images from the C2PA Organisation and analyse them with the forensic tool above – this will show you how C2PA validation works in practice.
These files have intact cryptographic chains – the validation should show "Valid":

Complete chain: Valid Adobe certificate, verified signature, unaltered claims.

Hardware signature: Signed by a C2PA-enabled camera at the time of capture.
How C2PA Validation Works
The cryptographic chain is checked step by step. The medium is only considered authentic if all four checks are passed. A single error in the chain means that subsequent steps can no longer be verified:
Valid C2PA Chain
Manipulated Chain
C2PA validation: Every layer must be intact – one error breaks the chain
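The step-by-step check above can be sketched as a sequential pipeline that stops at the first failure. The step names and the dict-based manifest are illustrative assumptions, not the C2PA API; the point is the fail-fast structure, because later steps are meaningless once an earlier one fails:

```python
def validate_chain(manifest: dict) -> tuple:
    """Run the checks in order; one failure breaks the whole chain,
    since subsequent steps can no longer be verified."""
    checks = [
        ("certificate trusted", manifest.get("cert_trusted", False)),
        ("signature valid", manifest.get("signature_valid", False)),
        ("claim hashes match", manifest.get("hashes_match", False)),
        ("history consistent", manifest.get("history_ok", False)),
    ]
    for name, passed in checks:
        if not passed:
            return ("Invalid", name)  # report the first broken layer
    return ("Valid", None)

good = {"cert_trusted": True, "signature_valid": True,
        "hashes_match": True, "history_ok": True}
tampered = dict(good, hashes_match=False)  # e.g. a single pixel was changed

print(validate_chain(good))      # -> ('Valid', None)
print(validate_chain(tampered))  # -> ('Invalid', 'claim hashes match')
```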
All test files are licensed under CC BY-SA 4.0. Source: c2pa-org/public-testfiles
Conclusion
Deepfakes are not a vision of the future – they are the present. The technology is evolving faster than detection methods.
Circumstantial evidence is like fingerprints (contestable); C2PA is like a DNA match (practically unambiguous).
Specifically, this means:
- Today: Use signal analysis and metadata as circumstantial evidence – but do not trust them blindly
- Tomorrow: Demand C2PA-verified media from cameras, software, and platforms
- Always: Critical thinking scales better than any algorithm
Deepfake Detection Skill
The skill in the webconsulting-skills collection documents forensic analysis methods in detail – from sensor fingerprints (PRNU/PCE) and compression artefacts to semantic forensics. Ideal for developers building detection pipelines or wishing to understand the topic more deeply.
This article serves for information and media literacy purposes. For legal questions, please consult experts.