The CN Tower is not on fire today, but a deepfake video may be making some people think it is.
What is the CN Tower deepfake video and why is it going viral?
A video that appears to have been generated using artificial intelligence has gone viral after being shared on Facebook. It shows what looks like smoke and flames pouring out of the CN Tower while spectators look on. Audio on the video includes a voice saying, "What the hell is that?", followed by "Oh my God." The video has been viewed millions of times, shared more than 7,000 times and has drawn more than 5,000 comments since being published on Sept. 22.
As of Sept. 23, if you type "CN Tower" into Google search, one of the top suggested searches is "CN Tower fire." That search surfaces news articles from 2017, when the CN Tower actually did catch fire, though that blaze was far less significant than the one depicted in the deepfake video.
A Metroland Media article on the CN Tower fire from 2017 has been read thousands of times today already as people search for information.
Who posted the deepfake video?
Adrian Gee is the name of the Facebook page that posted the video. The page has 1.6 million followers and the intro to the page reads, "Creator of viral moments since 2014. Now teaching the future: AI-generated art & content."
"I'm 2 blocks away from CN tower and it's totally fine. I'm looking right at it fake video," reads one Facebook comment calling into question the authenticity of the video.
"Thanks for the update, these (this is) fake news, it (is) going to hurt people, because there someone's love ones working by watching these videos," replied another comment."
Others directed people to the news coverage of the actual 2017 fire.
CN Tower officials confirm: 'The video is entirely fictional'
"The CN Tower is one of the most photographed landmarks in Canada and we love seeing it featured in stunning photos and creative videos online. But not everything you see is real," CN Tower officials said in an emailed statement Tuesday, Sept. 23.
"A recent video circulating online appears to show the CN Tower on fire. This video is a deepfake and entirely fictional. There is no fire, and the CN Tower remains safe, secure and fully operational. Unfortunately, this is not the first time AI-generated content or visual effects have been used to create misleading depictions of the CN Tower."
Viral deepfake videos are not new
Deepfake videos going viral is a growing phenomenon, and experts warn they will become harder to spot.
Earlier this year, a series of videos of a man complaining he couldn't find work because of immigrants turned out to be AI-generated deepfakes.
The man in the videos is not even real, according to a CBC investigation.
How can an average person spot a deepfake?
Deepfake detection is becoming harder because the technology is continuously improving, Abbas Yazdinejad, a post-doctoral researcher at the University of Toronto's Artificial Intelligence and Mathematics Modelling Lab, previously told Metroland.
However, there are a few textbook signs people can still watch out for.
These include:
Unnatural facial movements: Look for inconsistencies in blinking, lip-sync or eye direction that don't always match the audio
Skin texture and lighting mismatch: AI can sometimes struggle with realistic lighting transitions in videos, especially around the hairline, ears or jaw (a toy sketch of this cue follows this list)
Odd hand or body motion: Deepfake generators may sometimes distort complex body gestures or interactions with objects, making them seem off
Flat emotional expression: Even in high-quality deepfakes, Yazdinejad said micro-expressions can seem muted or robotic
Audio-video mismatch: Deepfakes can sometimes show subtle lags or imperfect alignment between voice tone and facial emotion, and these small inconsistencies can be a clue
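To make the lighting cue concrete, here is a toy sketch, not a real deepfake detector, that simply flags abrupt frame-to-frame brightness jumps in a clip so a viewer knows which frames to inspect by eye. It assumes the OpenCV package is installed, and the video file name is hypothetical.

```python
# Toy proxy for the "lighting mismatch" cue: flag abrupt frame-to-frame
# brightness jumps. A rough illustration only; assumes OpenCV
# (pip install opencv-python), and the file name is hypothetical.
import cv2

def lighting_jumps(video_path: str, threshold: float = 15.0) -> list[int]:
    cap = cv2.VideoCapture(video_path)
    flagged, prev_mean, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # mean brightness of the grayscale frame
        mean = float(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean())
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            flagged.append(idx)  # suspiciously abrupt lighting change
        prev_mean, idx = mean, idx + 1
    cap.release()
    return flagged

print(lighting_jumps("suspect_clip.mp4"))  # frame indices worth a closer look
```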
A reverse image or video search can also help trace where else the footage has appeared, which can sometimes lead back to the original source.
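One common building block behind such searches is a perceptual hash, which changes little under resizing or recompression, so two hashes that are close together usually indicate near-duplicate images. The minimal sketch below assumes the third-party Pillow and ImageHash packages, and the file names are hypothetical.

```python
# Minimal sketch of the idea behind reverse image search: compare
# perceptual hashes of a frame from the viral clip and a known photo.
# Assumes Pillow and ImageHash (pip install pillow imagehash);
# file names are hypothetical.
from PIL import Image
import imagehash

viral_frame = imagehash.phash(Image.open("viral_frame.jpg"))
archive_photo = imagehash.phash(Image.open("archive_photo.jpg"))

distance = viral_frame - archive_photo  # Hamming distance between 64-bit hashes
print(f"hash distance: {distance}")
if distance <= 8:  # a common rule-of-thumb threshold for near-duplicates
    print("The viral frame closely matches the archive photo")
```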
To limit distribution of deepfakes, Yazdinejad said experts are working on both technical detection (e.g., watermarking and provenance tracking) and policy frameworks.
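For a flavour of what provenance tracking means in practice, here is a toy sketch using only Python's standard library: a publisher signs a media file at publish time, and anyone holding the recorded signature can later confirm the file has not been altered. Real provenance systems such as the C2PA "Content Credentials" standard embed signed manifests inside the media itself; the key and file name below are hypothetical.

```python
# Toy provenance sketch: sign a media file at publish time, verify later.
# Real systems (e.g., C2PA) embed signed manifests in the file itself;
# this only illustrates the core idea.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical publisher secret

def sign_file(path: str) -> str:
    """Return an HMAC-SHA256 signature over the file's bytes."""
    with open(path, "rb") as f:
        return hmac.new(SECRET_KEY, f.read(), hashlib.sha256).hexdigest()

def verify_file(path: str, recorded_signature: str) -> bool:
    """Check the file still matches the signature recorded at publish time."""
    return hmac.compare_digest(sign_file(path), recorded_signature)

# Usage: record sign_file("tower_video.mp4") when publishing; any later
# edit to the file makes verify_file() return False.
```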
Deepfakes will become harder to detect
Ali Dehghantanha, founding director of the Cyber Science Lab and the Master of Cybersecurity and Threat Intelligence program at the University of Guelph, said deepfake technology will become increasingly hard to detect.
As the algorithms become smarter, the most reliable defence is still to "always be skeptical" and verify the source, said Dehghantanha, who co-founded AI cybersecurity firm Avaly.ai.
People should expect deepfake technology and its misuse to "explode," he said, including co-ordinated attacks, political misinformation and propaganda in the next major election.
"We'll see (misuse of) these deepfakes explode significantly and there will be more of these fake characters and the harmful messages they're spreading," said Dehghantanha.
Until stronger solutions are in place, experts say the safest approach is to blend healthy skepticism with verification habits.
Harmful ways deepfakes are being used today
Deepfake technology can be misused in a variety of harmful ways.
Yazdinejad said deepfakes can be used for:
Political misinformation and election interference: Synthetic AI-generated videos are used to fabricate speeches, events or behaviours of political figures, often to influence public opinion
Character assassination and harassment: Deepfake videos can be weaponized to defame individuals by inserting real people into false or compromising situations to ruin their reputation
Extremism and hate propaganda: Used to reinforce harmful stereotypes, fuel conflict and amplify public rage. Larger-scale state-sponsored groups are using deepfakes for widespread misinformation campaigns
Financial fraud and impersonation: Audiovisual deepfakes are used to impersonate CEOs, for example, in highly targeted phishing campaigns to trick people into transferring money or sending sensitive information
Non-consensual sexual content: This, according to Yazdinejad, is one of the earliest and still most prevalent misuses of deepfakes: creating sexual content, which at times involves exploiting real minors
-- With files from Loraine Centeno