CPU Deepfake Tutorial (No Graphics Card Required!)

Deepfakery
14 Sept 2020 · 08:29

TLDR: This tutorial shows how to create deepfake videos using only a CPU, with no graphics card required. It uses DeepFaceLab 2.0 and walks through downloading and installing the software, extracting images from videos, creating face sets, viewing and refining those sets, training the deepfake model, and finally merging the faces to produce the final video. The process is detailed throughout, with an emphasis on optimizing for CPU usage and tips for improving the quality of the deepfake.

Takeaways

  • 😀 This tutorial teaches creating deepfake videos using only a CPU, without the need for a graphics card.
  • 💻 The tutorial uses DeepFaceLab 2.0 (build 08_02_2020) and requires a Windows PC with all other applications closed to free up CPU resources.
  • 📂 Download DeepFaceLab from GitHub; no setup is required after extraction.
  • 📁 The 'workspace' folder organizes image and model files, with 'data_src' as the source video and 'data_dst' as the destination video.
  • 📸 Step 2 extracts images from the videos at a specified frames per second (FPS) to balance processing time and file size.
  • 🔍 Step 3 extracts the face sets; the chosen face type and image dimensions affect the data package's size and quality.
  • 👀 Step 4 is viewing and editing the face sets to remove unwanted or unusable faces and refine the training data.
  • 🤖 Step 5 is training the deepfake model, where the model learns to map faces using the extracted images.
  • 🎭 The merging step (Step 6) combines the trained model with the destination video to create the deepfake frames.
  • 📹 The final step converts the merged frames back into a video file, preserving the destination audio.
  • ⏭️ The tutorial concludes with a reminder to experiment with training and merging settings to achieve the desired result.
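To make the FPS trade-off in step 2 concrete, here is a quick back-of-the-envelope sketch (not part of DeepFaceLab; the clip length is an invented example):

```python
def frames_extracted(duration_seconds: float, fps: float) -> int:
    """Approximate number of images produced when extracting
    a clip of the given length at the given frame rate."""
    return int(duration_seconds * fps)

# A hypothetical 3-minute (180 s) source clip:
print(frames_extracted(180, 30))  # full 30 FPS -> 5400 images
print(frames_extracted(180, 5))   # lowered to 5 FPS -> 900 images
```

Lowering the FPS on a long source video cuts the image count (and the extraction and training workload) proportionally, at the cost of fewer face angles for the model to learn from.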

Q & A

  • What is the main topic of the video?

    -The main topic of the video is creating deepfake videos using only a CPU, without the need for a graphics card.

  • Which software is used in the tutorial?

    -DeepFaceLab 2.0 (build 08_02_2020) is used in the tutorial.

  • What are the system requirements for running DeepFaceLab as per the tutorial?

    -The system requirements include access to a Windows PC and the need to close all other applications that use CPU resources.

  • How can one obtain DeepFaceLab?

    -DeepFaceLab can be obtained by visiting github.com/iperov/DeepFaceLab, scrolling down to the 'Releases' section, and choosing either the torrent magnet link or the mega.nz download.

  • What is the purpose of the 'workspace' folder in DeepFaceLab?

    -The 'workspace' folder in DeepFaceLab holds the images and trained model files for the deepfake process.

  • How does one extract images from a video in DeepFaceLab?

    -One extracts images from a video by double-clicking on the file labeled '2 extract images from video data src' and entering the frames per second for the extraction.

  • What is the significance of the 'data_src' and 'data_dst' folders?

    -The 'data_src' folder contains the source video files, and the 'data_dst' folder contains the destination video files used to produce the face sets for the deepfake.

  • How can one adjust the number of faces extracted from each image?

    -One can adjust the number of faces extracted from each image by typing a number when prompted during the face extraction process, or pressing enter to extract every face.

  • What is the purpose of the training step in creating a deepfake?

    -The training step involves loading image files and running iterations to create a deepfake model that can be used to merge faces in a video.

  • How does one view and potentially remove unwanted faces from the face sets?

    -One can view and remove unwanted faces by using the files labeled '4.1 data src view aligned result' and '5.1 data dst view aligned results'.

  • What is the final step to create the deepfake video?

    -The final step is to merge the new deepfake frames into a video file with the destination audio by double-clicking the file labeled '8 merged to mp4'.

Outlines

00:00

🖥️ Deepfake Video Creation with CPU

This paragraph introduces a tutorial on creating deepfake videos using only a CPU, without the need for a graphics card. The tutorial uses DeepFaceLab 2.0 (build 08_02_2020). It advises closing other CPU-intensive applications and walks through downloading and installing DeepFaceLab from GitHub. The workspace folder structure is explained, detailing where to place the source and destination video files. The process includes extracting images from videos, selecting appropriate frame rates, and choosing image formats. It also covers extracting face sets, adjusting face-detection settings, and viewing the extracted face sets to remove unwanted images.

05:02

🎥 Training and Merging Faces for Deepfakes

The second paragraph covers training the deepfake model using the '6 train quick 96' file in DeepFaceLab. It describes how to start training, monitor progress through the preview window, and save the model. The tutorial then moves on to merging the trained faces onto the destination video using the '7 merge quick 96' file. It explains the interactive merger, where users can adjust the erode and blur mask values for a more realistic result. The final step merges the deepfake frames into a video file with the destination audio. The paragraph concludes with a reminder that training can be resumed to improve quality and encourages experimentation with the merger settings.

Keywords

💡Deepfake

A deepfake is synthetic media in which one person's likeness is superimposed onto another person's body in a video, creating a convincing but false appearance. In the context of the video, deepfakes are created using a CPU without the need for a graphics card, showcasing how accessible the technology has become. The script describes a step-by-step process for generating deepfake videos, emphasizing the software's ability to manipulate visual content.

💡CPU

CPU stands for Central Processing Unit, the primary component of a computer that performs most of its processing. The video's title highlights that no graphics card is required, the CPU alone being sufficient for the deepfake creation process. This is significant because it lowers the hardware requirements for such tasks, making them accessible to a broader audience.

💡DeepFaceLab

DeepFaceLab is the software application used in the tutorial to create deepfake videos. The video specifically uses version 2.0, build 08_02_2020. It serves throughout the tutorial as the primary tool for generating the deepfake, with its features demonstrated in a practical, step-by-step manner.

💡Quick 96 preset trainer

The 'Quick 96 preset trainer' is a setting within DeepFaceLab optimized for faster training of the deepfake model. The script uses this preset with lowered settings suitable for CPU-only training, which is central to the video's theme of creating deepfakes without a graphics card. The preset makes efficient use of CPU resources to achieve the desired deepfake effect.

💡Source video

The source video refers to the original video from which images are extracted to create the deepfake. In the script, 'data_src' is the folder containing the source video, which is used to produce the face sets necessary for the deepfake process. The video explains how to extract images from this source video, which are then used to train the deepfake model.

💡Destination video

The destination video is the video onto which the deepfake face will be superimposed. In the script, 'data_dst' represents the folder containing the destination video. The tutorial describes how to extract face sets from this video, which will be merged with the source video to create the final deepfake, demonstrating the process of combining two different video sources.

💡Face sets

Face sets are collections of facial images extracted from videos that are used to train the deepfake model to recognize and replicate facial features. The script outlines the process of extracting face sets from both the source and destination videos, which is a critical step in ensuring the deepfake appears realistic and accurate.
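In practice, refining a face set comes down to deleting image files from the aligned folder. A minimal sketch of that idea in Python (the folder path and the notion of a "reject list" are illustrative assumptions, not DeepFaceLab features):

```python
from pathlib import Path

def prune_face_set(aligned_dir: str, reject: list[str]) -> int:
    """Remove unwanted face images from an aligned face-set folder.

    Returns the number of files actually deleted."""
    removed = 0
    for name in reject:
        face = Path(aligned_dir) / name
        if face.is_file():
            face.unlink()
            removed += 1
    return removed
```

In the tutorial itself this is done by hand: open the aligned folder via the '4.1' / '5.1' view files and delete any false detections or badly cropped faces before training.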

💡Training

Training in the context of the video refers to the process of teaching the deepfake model to map the source face onto the destination face accurately. The script details the steps involved in training, including the use of the '6 train quick 96' file in DeepFaceLab, which starts the model's learning process to generate a convincing deepfake.

💡Merging

Merging is the final step in creating a deepfake video, where the trained model's output is combined with the destination video to produce the final fake video. The script explains how to use the '7 merge quick 96' file in DeepFaceLab to merge the faces, highlighting the interactive merger's role in fine-tuning the deepfake's appearance.
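Conceptually, the last assembly step is the kind of job FFmpeg handles: stitch the merged frames into a video stream and copy the audio from the destination clip. A hypothetical sketch of such a command line (paths, frame pattern, and frame rate are placeholders, not DeepFaceLab's actual invocation):

```python
def merge_command(frames_pattern: str, audio_source: str,
                  out_file: str, fps: int) -> list[str]:
    """Build an FFmpeg argument list that turns an image sequence into an
    MP4, taking the audio track from the destination video."""
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", frames_pattern,   # merged deepfake frames, e.g. %05d.png
        "-i", audio_source,     # destination clip, used only for its audio
        "-map", "0:v", "-map", "1:a",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-shortest",
        out_file,
    ]

print(" ".join(merge_command("merged/%05d.png", "data_dst.mp4",
                             "result.mp4", 30)))
```

DeepFaceLab wraps this step in the '8 merged to mp4' batch file, so none of this has to be typed manually.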

💡FPS (Frames Per Second)

FPS is a measure of the number of individual frames that are displayed per second in a video. The script mentions entering a specific FPS value when extracting images from videos, which affects the number of images generated. A lower FPS results in fewer images, which can be beneficial for processing long videos or for reducing the computational load on the CPU.

💡Interactive merger

The interactive merger is a feature within DeepFaceLab that lets users manually adjust the deepfake's facial mask and blur values to fine-tune the final output. The script describes using keyboard commands to modify the erode and blur mask values, helping achieve a more seamless integration of the source face onto the destination video.
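Eroding shrinks the face mask away from its border and blurring feathers the edge, which is why adjusting both helps the pasted face blend in. A toy NumPy sketch of the two operations (a simplified 4-neighbour erosion and box blur, assumed here for illustration rather than DeepFaceLab's actual implementation):

```python
import numpy as np

def erode(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Shrink a binary mask by one pixel per iteration (4-neighbour minimum)."""
    m = mask.astype(bool)
    for _ in range(iterations):
        shrunk = m.copy()
        shrunk[1:, :] &= m[:-1, :]   # keep a pixel only if the one above is set
        shrunk[:-1, :] &= m[1:, :]   # ...and the one below
        shrunk[:, 1:] &= m[:, :-1]   # ...and the one to the left
        shrunk[:, :-1] &= m[:, 1:]   # ...and the one to the right
        m = shrunk
    return m.astype(float)

def blur(mask: np.ndarray, passes: int = 1) -> np.ndarray:
    """Feather mask edges with a repeated 3x3 box blur."""
    m = mask.astype(float)
    for _ in range(passes):
        padded = np.pad(m, 1, mode="edge")
        m = sum(padded[i:i + m.shape[0], j:j + m.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    return m
```

After eroding and blurring, mask values fall off smoothly from 1 (pure source face) to 0 (pure destination frame), so the composite has no hard seam.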

Highlights

Learn to create deepfake videos using only a CPU, with no graphics card required.

The tutorial uses DeepFaceLab 2.0 (build 08_02_2020) for creating deepfakes.

Ensure to close all other applications using CPU resources before starting.

Access to a Windows PC is necessary for this tutorial.

DeepFaceLab's quick 96 preset trainer is used, with settings lowered for CPU-only training.

Download and install DeepFaceLab from iperov's GitHub repository.

No setup is required for DeepFaceLab; it's ready to use once extracted.

The workspace folder contains subfolders for images and trained model files.

Custom deepfakes can be made by moving your own video clips into the designated folders.

Extract images from video using a specified frames per second rate.

Lowering the FPS is recommended for extremely long source videos.

Face sets are extracted for use in the deepfake process.

CPU is selected by default if no compatible GPU is installed.

Image dimensions and quality can be adjusted to affect the data package size.

View and edit face sets to remove unwanted faces or errors.

Training begins with a short model name and selecting the CPU for processing.

The training accuracy is indicated by a graph and preview window.

The training process can be saved and restarted at any time.

Merging faces creates the final deepfake video.

The interactive merger allows for adjustments to the erode and blur mask values.

Final deep fake frames are merged into a video file with destination audio.

The result is a deepfake video that can be viewed and shared.

The quality of the deepfake can be improved by restarting the training.

Experiment with merger settings to achieve desired results.