Deepfake
How to Spot a Deepfake
With deepfake generation tools widely available, everyone should have a basic understanding of how to spot a deepfake. Deepfake detection tools are available to the public, but they don't always identify deepfakes. As a result, it's important to know the telltale signs:
- Unnatural face, environment or lighting: Deepfake images or sections of videos can have unnatural facial expressions, facial feature placement or jagged edges. The environment itself (such as the lighting) can also be unrealistic.
- Unnatural behavior: A deepfake video must maintain continuity from frame to frame, which is difficult to achieve. As a result, you might spot unnatural behaviors such as uneven blinking or choppy motion.
- Image artifacts and blurriness: Deepfake images may have weird artifacts such as blurriness around the neck where the body of one person is stitched together with the face of another.
- Audio: When a deepfake includes audio, the lips may move out of sync with the words you hear.
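One of the cues above, choppy motion, can be roughly approximated in code. The sketch below is illustrative only, not a real detector: it flags frames whose change from the previous frame spikes far above the clip's typical frame-to-frame difference. The function names, the synthetic grayscale clip, and the spike threshold are all invented for this example.

```python
def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-sized grayscale frames."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

def choppy_frames(frames, spike_factor=3.0):
    """Flag frames whose change from the previous frame spikes well above the median change."""
    diffs = [mean_abs_diff(a, b) for a, b in zip(frames, frames[1:])]
    baseline = sorted(diffs)[len(diffs) // 2]  # median frame-to-frame difference
    return [i + 1 for i, d in enumerate(diffs) if baseline > 0 and d > spike_factor * baseline]

# Synthetic 4x4 clip: brightness drifts smoothly (+2 per frame), with an abrupt jump at frame 5.
frames = [[[10 + 2 * t] * 4 for _ in range(4)] for t in range(10)]
frames[5] = [[60] * 4 for _ in range(4)]

# Both frames adjacent to the discontinuity stand out against the smooth baseline.
print(choppy_frames(frames))  # → [5, 6]
```

Real detectors work on face landmarks, blink timing, and compression artifacts rather than raw pixel differences, but the principle is the same: genuine video changes smoothly, and stitched-together fakes often don't.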
How are deepfakes used?
Deepfake technology has historically been used for illicit purposes, including to generate non-consensual pornography. The FBI released a public service announcement in June 2023 warning the public about the dangers of generative AI and its use for "Explicit Content Creation," "Sextortion," and "Harassment."
In 2017, a Reddit user named "deepfakes" created a forum for pornography featuring face-swapped actors. Since then, deepfake pornography (particularly revenge porn) has repeatedly made the news, severely damaging the reputations of celebrities and prominent figures. According to a Deeptrace report, pornography made up 96% of deepfake videos found online in 2019.
Deepfakes have also been used for non-sexual criminal activity, including one instance in 2023 that involved the use of deepfake technology to mimic the voice of a woman’s child to threaten and extort her.
Deepfake video has also been used in politics. In 2018, for example, a Belgian political party released a video of Donald Trump giving a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech, however; it was a deepfake. It was not the first deepfake used to create a misleading video, and tech-savvy political experts are bracing for a future wave of fake news featuring convincingly realistic deepfakes.
But journalists, human rights groups, and media technologists have also found positive uses for the technology. For instance, the 2020 HBO documentary "Welcome to Chechnya" used deepfake technology to hide the identities of Russian LGBTQ refugees whose lives were at risk while also telling their stories.
WITNESS, an organization focused on the use of media to defend human rights, has expressed optimism around the technology when used in this way, while also recognizing digital threats.
"Part of our work is really exploring the positive use of that technology, from protecting people like activists on video, to taking advocacy approaches, to doing political satire," said shirin anlen, a media technologist for WITNESS.
For anlen and WITNESS, the technology isn't something to be entirely feared. Instead, it should be seen as a tool. "It's building on top of a long-term relationship we have had with audiovisuals. We've already been manipulating audio. We've already been manipulating visuals in different ways," anlen said.
Experts like anlen and López believe that the best approach the public can take to deepfakes is not to panic, but to be informed about the technology and its capabilities.
Use generative AI tools responsibly
In its early phase, AI can be unreliable and even risky. But it’s also fun and interesting to experiment with. And like it or not, generative AI tools are being integrated into all kinds of software, from email and search to Google Docs, Microsoft Office, Zoom, Expedia, and Snapchat.
Playing around with chatbots and image generators is a good way to learn more about how the technology works and what it can and can’t do.
«My main piece of advice to everybody is, do use this stuff,» says Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School. «You absolutely should be making things. You should absolutely spend an hour on ChatGPT. You should try and automate your job.»
Mollick requires his students to use AI. And while he’s an enthusiastic user of chatbots and other forms of AI, he’s also wary of the ways they can be misused.
«You’ve got to figure this thing out because we’re in a world where there’s nobody with great advice right now. There isn’t like a manual out there that you can read,» Mollick says.
If you are going to experiment with generative AI, here are a few things to keep in mind.
- Privacy: Be smart about sharing personal information with AI software. Systems may use your input for training, and companies may have access to what you enter as inputs.
- Ethics: What are you using the software to create? Are you asking an image generator to copy the style of a living artist, for example? Or using it in a class without your teacher’s knowledge?
- Consent: If you’re creating an image, who are you depicting? Is it parody? Could they be harmed by the portrayal?
- Disclosure: If you’re sharing your AI creations on social media, have you made it clear they are computer-generated? What would happen if they were shared further without that disclosure?
- Fact check: Chatbots get things wrong, so double-check any important information before you post or share it.
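The privacy point can be made concrete: before pasting text into a chatbot, you can strip obvious personal data. The sketch below is a hypothetical regex-based scrubber; the patterns are illustrative only, and real PII detection (names, addresses, account numbers) requires far more than a few regular expressions.

```python
import re

# Illustrative patterns for common personal data; real detectors are far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text):
    """Replace likely personal data with placeholders before sending text to an AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# Note: the name "Jane" survives; catching names needs entity recognition, not regexes.
prompt = "Summarize this: contact Jane at jane.doe@example.com or 555-867-5309."
print(scrub(prompt))  # → Summarize this: contact Jane at [EMAIL] or [PHONE].
```

Even with scrubbing, the safest assumption is that anything you type into a hosted AI tool may be stored and read, so leave out what you wouldn't put in an email to a stranger.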
«You can think of it as like an infinitely helpful intern with access to all of human knowledge who makes stuff up every once in a while,» Mollick says.
"A bad actor can take one of these tools ... and use this to make unimaginable amounts of really plausible, almost terrifying misinformation that the average person is not going to recognize as misinformation," Marcus said.
"That may be complete with data, fake references to studies that haven't even existed before. And not just one story like this, which a human could write, but thousands or millions or billions, because you can automate these things."