# deepface

**Repository Path**: IT_xiaocao/deepface

## Basic Information

- **Project Name**: deepface
- **Description**: Mirror of serengil/deepface from GitHub (hosted in China)
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2025-02-26
- **Last Updated**: 2025-07-09

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# deepface
[![Downloads](https://static.pepy.tech/personalized-badge/deepface?period=total&units=international_system&left_color=grey&right_color=blue&left_text=downloads)](https://pepy.tech/project/deepface) [![Stars](https://img.shields.io/github/stars/serengil/deepface?color=yellow&style=flat&label=%E2%AD%90%20stars)](https://github.com/serengil/deepface/stargazers) [![License](http://img.shields.io/:license-MIT-green.svg?style=flat)](https://github.com/serengil/deepface/blob/master/LICENSE) [![Tests](https://github.com/serengil/deepface/actions/workflows/tests.yml/badge.svg)](https://github.com/serengil/deepface/actions/workflows/tests.yml) [![DOI](http://img.shields.io/:DOI-10.17671/gazibtd.1399077-blue.svg?style=flat)](https://doi.org/10.17671/gazibtd.1399077) [![Blog](https://img.shields.io/:blog-sefiks.com-blue.svg?style=flat&logo=wordpress)](https://sefiks.com) [![YouTube](https://img.shields.io/:youtube-@sefiks-red.svg?style=flat&logo=youtube)](https://www.youtube.com/@sefiks?sub_confirmation=1) [![Twitter](https://img.shields.io/:follow-@serengil-blue.svg?style=flat&logo=x)](https://twitter.com/intent/user?screen_name=serengil) [![Support me on Patreon](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Fshieldsio-patreon.vercel.app%2Fapi%3Fusername%3Dserengil%26type%3Dpatrons&style=flat)](https://www.patreon.com/serengil?repo=deepface) [![GitHub Sponsors](https://img.shields.io/github/sponsors/serengil?logo=GitHub&color=lightgray)](https://github.com/sponsors/serengil) [![Buy Me a Coffee](https://img.shields.io/badge/-buy_me_a%C2%A0coffee-gray?logo=buy-me-a-coffee)](https://buymeacoffee.com/serengil)

DeepFace is a lightweight [face recognition](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) and facial attribute analysis ([age](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [gender](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [emotion](https://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/) and [race](https://sefiks.com/2019/11/11/race-and-ethnicity-prediction-in-keras/)) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/), [`FaceNet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/), [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/), [`DeepFace`](https://sefiks.com/2020/02/17/face-recognition-with-facebook-deepface-in-keras/), [`DeepID`](https://sefiks.com/2020/06/16/face-recognition-with-deepid-in-keras/), [`ArcFace`](https://sefiks.com/2020/12/14/deep-face-recognition-with-arcface-in-keras-and-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), `SFace` and `GhostFaceNet`.

[`Experiments`](https://github.com/serengil/deepface/tree/master/benchmarks) show that human beings have 97.53% accuracy on facial recognition tasks, whereas those models have already reached and passed that accuracy level.

## Installation [![PyPI](https://img.shields.io/pypi/v/deepface.svg)](https://pypi.org/project/deepface/)

The easiest way to install deepface is to download it from [`PyPI`](https://pypi.org/project/deepface/). It will install the library itself and its prerequisites as well.

```shell
$ pip install deepface
```

Alternatively, you can also install deepface from its source code. The source code may have new features that are not yet published in the pip release.

```shell
$ git clone https://github.com/serengil/deepface.git
$ cd deepface
$ pip install -e .
```

Once you have installed the library, you will be able to import it and use its functionalities.

```python
from deepface import DeepFace
```

**A Modern Facial Recognition Pipeline** - [`Demo`](https://youtu.be/WnUVYQP4h44)

A modern [**face recognition pipeline**](https://sefiks.com/2020/05/01/a-gentle-introduction-to-face-recognition-in-deep-learning/) consists of 5 common stages: [detect](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [align](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [normalize](https://sefiks.com/2020/11/20/facial-landmarks-for-face-recognition-with-dlib/), [represent](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) and [verify](https://sefiks.com/2020/05/22/fine-tuning-the-threshold-in-face-recognition/). While DeepFace handles all of these common stages in the background, you do not need deep knowledge of the processes behind them. You can simply call its verification, find or analysis function with a single line of code.

**Face Verification** - [`Demo`](https://youtu.be/KRCvkNCOphE)

This function verifies whether a face pair belongs to the same person or to different persons. It expects exact image paths as inputs, but passing numpy or base64 encoded images is also welcome. It then returns a dictionary, and you should check only its `verified` key.

```python
result = DeepFace.verify(
  img1_path = "img1.jpg",
  img2_path = "img2.jpg",
)
```
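The returned dictionary carries the boolean decision under `verified`, together with supporting values such as `distance` and `threshold` (key names follow the current DeepFace documentation). A minimal sketch of consuming it:

```python
result = DeepFace.verify(
    img1_path = "img1.jpg",
    img2_path = "img2.jpg",
)

# the boolean decision plus the measured distance and the decision threshold
if result["verified"]:
    print("same person:", result["distance"], "<=", result["threshold"])
else:
    print("different persons")
```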

**Face Recognition** - [`Demo`](https://youtu.be/Hrjp-EStM_s)

[Face recognition](https://sefiks.com/2020/05/25/large-scale-face-recognition-for-deep-learning/) requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It will look for the identity of the input image in the database path and return a list of pandas data frames as output. Meanwhile, facial embeddings of the facial database are stored in a pickle file so that the next search is faster. The result list has one data frame per face appearing in the source image. Besides, target images in the database can contain many faces as well.

```python
dfs = DeepFace.find(
  img_path = "img1.jpg",
  db_path = "C:/workspace/my_db",
)
```
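Each returned data frame lists candidate matches from the database, with the matched file path under the `identity` column and the measured dissimilarity under `distance` (column names per the current DeepFace documentation; older releases may name the distance column differently). A minimal sketch:

```python
dfs = DeepFace.find(
    img_path = "img1.jpg",
    db_path = "C:/workspace/my_db",
)

for df in dfs:          # one data frame per face found in img1.jpg
    if df.empty:
        print("no match for this face")
        continue
    best = df.iloc[0]   # rows are ordered by distance, closest match first
    print(best["identity"], best["distance"])
```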

**Embeddings** - [`Demo`](https://youtu.be/OYialFo7Qo4)

Face recognition models basically represent facial images as multi-dimensional vectors. Sometimes you need those embedding vectors directly. DeepFace comes with a dedicated represent function for this. It returns a list of embeddings, with one entry per face appearing in the image path.

```python
embedding_objs = DeepFace.represent(
  img_path = "img.jpg"
)
```

Each embedding is returned as an array, and the size of that array depends on the model name. For instance, VGG-Face is the default model and it represents facial images as 4096-dimensional vectors.

```python
# VGG-Face is the default model, so each embedding has 4096 dimensions
for embedding_obj in embedding_objs:
  embedding = embedding_obj["embedding"]
  assert isinstance(embedding, list)
  assert len(embedding) == 4096
```

Here, the embedding is also [plotted](https://sefiks.com/2020/05/01/a-gentle-introduction-to-face-recognition-in-deep-learning/) horizontally with 4096 slots. Each slot corresponds to one dimension value of the embedding vector, and the dimension value is explained in the colorbar on the right. Similar to 2D barcodes, the vertical dimension stores no information in the illustration.
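Since embeddings are plain vectors, you can compare them yourself. Below is a minimal sketch computing the cosine distance between two faces with numpy; the image paths are hypothetical, only the first detected face per image is used, and the manual math is illustrative rather than the library's exact implementation (use `DeepFace.verify` for the built-in decision logic).

```python
import numpy as np
from deepface import DeepFace

def get_embedding(path):
    # take the first detected face's embedding as a numpy vector
    return np.array(DeepFace.represent(img_path = path)[0]["embedding"])

e1 = get_embedding("img1.jpg")
e2 = get_embedding("img2.jpg")

# cosine distance: 0 means identical direction, larger means less similar
cosine_distance = 1 - np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
print(cosine_distance)
```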

**Face Recognition Models** - [`Demo`](https://youtu.be/eKOZawGR3y0)

DeepFace is a **hybrid** face recognition package. It currently wraps many **state-of-the-art** face recognition models: [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/), [`FaceNet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/), [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/), [`DeepFace`](https://sefiks.com/2020/02/17/face-recognition-with-facebook-deepface-in-keras/), [`DeepID`](https://sefiks.com/2020/06/16/face-recognition-with-deepid-in-keras/), [`ArcFace`](https://sefiks.com/2020/12/14/deep-face-recognition-with-arcface-in-keras-and-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), `SFace` and `GhostFaceNet`. The default configuration uses the VGG-Face model.

```python
models = [
  "VGG-Face",
  "Facenet",
  "Facenet512",
  "OpenFace",
  "DeepFace",
  "DeepID",
  "ArcFace",
  "Dlib",
  "SFace",
  "GhostFaceNet",
]

#face verification
result = DeepFace.verify(
  img1_path = "img1.jpg",
  img2_path = "img2.jpg",
  model_name = models[0],
)

#face recognition
dfs = DeepFace.find(
  img_path = "img1.jpg",
  db_path = "C:/workspace/my_db",
  model_name = models[1],
)

#embeddings
embedding_objs = DeepFace.represent(
  img_path = "img.jpg",
  model_name = models[2],
)
```

FaceNet, VGG-Face, ArcFace and Dlib are the overperforming models based on experiments - see [`BENCHMARKS`](https://github.com/serengil/deepface/tree/master/benchmarks) for more details. You can find the measured scores of various models in DeepFace and the reported scores from their original studies in the following table.

| Model        | Measured Score | Declared Score |
| ------------ | -------------- | -------------- |
| Facenet512   | 98.4%          | 99.6%          |
| Human-beings | 97.5%          | 97.5%          |
| Facenet      | 97.4%          | 99.2%          |
| Dlib         | 96.8%          | 99.3%          |
| VGG-Face     | 96.7%          | 98.9%          |
| ArcFace      | 96.7%          | 99.5%          |
| GhostFaceNet | 93.3%          | 99.7%          |
| SFace        | 93.0%          | 99.5%          |
| OpenFace     | 78.7%          | 92.9%          |
| DeepFace     | 69.0%          | 97.3%          |
| DeepID       | 66.5%          | 97.4%          |

Conducting experiments with these models within DeepFace may reveal disparities compared to the original studies, owing to the adoption of different detection or normalization techniques. Furthermore, some models were published only with their backbones, lacking pre-trained weights. In those cases, we use their re-implementations instead of the original pre-trained weights.

**Similarity** - [`Demo`](https://youtu.be/1EPoS69fHOc)

Face recognition models are regular [convolutional neural networks](https://sefiks.com/2018/03/23/convolutional-autoencoder-clustering-images-with-neural-networks/) and they are responsible for representing faces as vectors. We expect that a face pair of the same person should be [more similar](https://sefiks.com/2020/05/22/fine-tuning-the-threshold-in-face-recognition/) than a face pair of different persons.

Similarity can be calculated with different metrics such as [cosine similarity](https://sefiks.com/2018/08/13/cosine-similarity-in-machine-learning/), euclidean distance or L2-normalized euclidean distance. The default configuration uses cosine similarity. According to [experiments](https://github.com/serengil/deepface/tree/master/benchmarks), no single distance metric outperforms the others.

```python
metrics = ["cosine", "euclidean", "euclidean_l2"]

#face verification
result = DeepFace.verify(
  img1_path = "img1.jpg",
  img2_path = "img2.jpg",
  distance_metric = metrics[1],
)

#face recognition
dfs = DeepFace.find(
  img_path = "img1.jpg",
  db_path = "C:/workspace/my_db",
  distance_metric = metrics[2],
)
```

**Facial Attribute Analysis** - [`Demo`](https://youtu.be/GT2UeN85BdA)

DeepFace also comes with a strong facial attribute analysis module, covering [`age`](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [`gender`](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [`facial expression`](https://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/) (including angry, fear, neutral, sad, disgust, happy and surprise) and [`race`](https://sefiks.com/2019/11/11/race-and-ethnicity-prediction-in-keras/) (including asian, white, middle eastern, indian, latino and black) predictions. The result list has one entry per face appearing in the source image.

```python
objs = DeepFace.analyze(
  img_path = "img4.jpg",
  actions = ['age', 'gender', 'race', 'emotion'],
)
```
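The returned objects are plain dictionaries. A minimal sketch of reading them (key names such as `age`, `dominant_gender`, `dominant_emotion` and `dominant_race` follow the current DeepFace documentation; treat them as assumptions if your version differs):

```python
objs = DeepFace.analyze(
    img_path = "img4.jpg",
    actions = ['age', 'gender', 'race', 'emotion'],
)

for obj in objs:  # one dictionary per face found in img4.jpg
    print(
        f"age: {obj['age']}, "
        f"gender: {obj['dominant_gender']}, "
        f"emotion: {obj['dominant_emotion']}, "
        f"race: {obj['dominant_race']}"
    )
```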

The age model achieved ± 4.65 MAE, and the gender model achieved 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its [tutorial](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/).

**Face Detection and Alignment** - [`Demo`](https://youtu.be/GZ2p2hj2H5k)

Face detection and alignment are important early stages of a modern face recognition pipeline. [Experiments](https://github.com/serengil/deepface/tree/master/benchmarks) show that detection increases face recognition accuracy by up to 42%, while alignment increases it by up to 6%. [`OpenCV`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [`Ssd`](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), [`MtCnn`](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/), `Faster MtCnn`, [`RetinaFace`](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/), [`MediaPipe`](https://sefiks.com/2022/01/14/deep-face-detection-with-mediapipe/), `Yolo`, `YuNet` and `CenterFace` detectors are all wrapped in deepface.

All deepface functions accept optional `detector_backend` and `align` input arguments. You can switch among these detectors and alignment modes with those arguments. OpenCV is the default detector, and alignment is on by default.

```python
backends = [
  'opencv',
  'ssd',
  'dlib',
  'mtcnn',
  'fastmtcnn',
  'retinaface',
  'mediapipe',
  'yolov8',
  'yolov11s',
  'yolov11n',
  'yolov11m',
  'yunet',
  'centerface',
]

alignment_modes = [True, False]

#face verification
obj = DeepFace.verify(
  img1_path = "img1.jpg",
  img2_path = "img2.jpg",
  detector_backend = backends[0],
  align = alignment_modes[0],
)

#face recognition
dfs = DeepFace.find(
  img_path = "img.jpg",
  db_path = "my_db",
  detector_backend = backends[1],
  align = alignment_modes[0],
)

#embeddings
embedding_objs = DeepFace.represent(
  img_path = "img.jpg",
  detector_backend = backends[2],
  align = alignment_modes[0],
)

#facial analysis
demographies = DeepFace.analyze(
  img_path = "img4.jpg",
  detector_backend = backends[3],
  align = alignment_modes[0],
)

#face detection and alignment
face_objs = DeepFace.extract_faces(
  img_path = "img.jpg",
  detector_backend = backends[4],
  align = alignment_modes[0],
)
```

Face recognition models are actually CNN models and they expect inputs of a standard size. So, resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.
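Each item returned by `extract_faces` is a dictionary; per the current DeepFace documentation it contains the cropped face as a numpy array under `face`, its bounding box under `facial_area` and a detector `confidence` score (treat these key names as assumptions if your version differs). A minimal sketch with the default detector:

```python
face_objs = DeepFace.extract_faces(img_path = "img.jpg")

for face_obj in face_objs:
    face = face_obj["face"]          # cropped and aligned face as a numpy array
    area = face_obj["facial_area"]   # bounding box, e.g. {"x": ..., "y": ..., "w": ..., "h": ...}
    print(face.shape, area, face_obj["confidence"])
```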

[RetinaFace](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/) and [MtCnn](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/) seem to overperform in the detection and alignment stages, but they are much slower. If the speed of your pipeline matters more, then you should use opencv or ssd. On the other hand, if accuracy matters more, then you should use retinaface or mtcnn.

The performance of RetinaFace is very satisfactory even in crowds, as shown in the illustration below. Besides, it comes with incredible facial landmark detection performance. The highlighted red points show some of the facial landmarks such as eyes, nose and mouth. That is why the alignment score of RetinaFace is high as well.


The Yellow Angels - Fenerbahce Women's Volleyball Team

You can find out more about RetinaFace in its [repository](https://github.com/serengil/retinaface).

**Real Time Analysis** - [`Demo`](https://youtu.be/-c9sSJcx6wI)

You can run deepface on real-time video as well. The stream function accesses your webcam and applies both face recognition and facial attribute analysis. It starts analyzing a frame once it can focus on a face for 5 consecutive frames, and then it shows the results for 5 seconds.

```python
DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")
```
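The 5-frame and 5-second behaviour corresponds to the `frame_threshold` and `time_threshold` arguments of `stream` (parameter names per the current DeepFace documentation; treat them as assumptions if your version differs). A minimal sketch tweaking them:

```python
DeepFace.stream(
    db_path = "C:/User/Sefik/Desktop/database",
    frame_threshold = 5,  # consecutive frames a face must stay in focus before analysis
    time_threshold = 5,   # seconds the result stays on screen
)
```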

Even though face recognition is based on one-shot learning, you can also use multiple face pictures of a person. You should rearrange your directory structure as illustrated below.

```bash
user
├── database
│   ├── Alice
│   │   ├── Alice1.jpg
│   │   ├── Alice2.jpg
│   ├── Bob
│   │   ├── Bob.jpg
```

**React UI** - [`Demo part-i`](https://youtu.be/IXoah6rhxac), [`Demo part-ii`](https://youtu.be/_waBA-cH2D4)

If you intend to perform face verification tasks directly from your browser, [deepface-react-ui](https://github.com/serengil/deepface-react-ui) is a separate repository built with ReactJS that depends on the deepface api.

**Face Anti Spoofing** - [`Demo`](https://youtu.be/UiK1aIjOBlQ)

DeepFace also includes an anti-spoofing analysis module to determine whether a given image is real or fake. To activate this feature, set the `anti_spoofing` argument to True in any DeepFace task.

```python
# anti spoofing test in face detection
face_objs = DeepFace.extract_faces(
  img_path = "dataset/img1.jpg",
  anti_spoofing = True
)
assert all(face_obj["is_real"] is True for face_obj in face_objs)

# anti spoofing test in real time analysis
DeepFace.stream(
  db_path = "C:/User/Sefik/Desktop/database",
  anti_spoofing = True
)
```

**API** - [`Demo`](https://youtu.be/HeKCQ6U9XmI)

DeepFace serves an API as well - see the [`api folder`](https://github.com/serengil/deepface/tree/master/deepface/api/src) for more details. You can clone the deepface source code and run the api with the command below. It uses a gunicorn server to bring up a rest service. In this way, you can call deepface from an external system such as a mobile app or the web.

```shell
cd scripts
./service.sh
```

Face recognition, facial attribute analysis and vector representation functions are all covered by the API. You are expected to call these functions as HTTP POST methods. The default service endpoints are:

- Face recognition: `http://localhost:5005/verify`
- Facial attribute analysis: `http://localhost:5005/analyze`
- Vector representation: `http://localhost:5005/represent`

The API accepts images as file uploads (via form data), or as exact image paths, URLs or base64-encoded strings (via either JSON or form data), providing versatile options to meet the needs of different clients. A Postman project is available [`here`](https://github.com/serengil/deepface/tree/master/deepface/api/postman) to show how these methods should be called.

**Dockerized Service** - [`Demo`](https://youtu.be/9Tk9lRQareA) [![Docker Pulls](https://img.shields.io/docker/pulls/serengil/deepface?logo=docker)](https://hub.docker.com/r/serengil/deepface)

The following command set will serve deepface on `localhost:5005` via docker. Then, you will be able to consume the deepface services such as verify, analyze and represent. Also, if you want to build the image yourself instead of using the pre-built image from docker hub, the [Dockerfile](https://github.com/serengil/deepface/blob/master/Dockerfile) is available in the root folder of the project.

```shell
# docker build -t serengil/deepface . # build docker image from Dockerfile
docker pull serengil/deepface # use pre-built docker image from docker hub
docker run -p 5005:5000 serengil/deepface
```
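Once the service is up (via the shell script or Docker), you can call it over plain HTTP. Below is a minimal sketch with Python `requests`; the JSON field names (`img1_path`, `img2_path`, `model_name`) are assumptions based on the Postman collection and may differ between versions, so check the collection shipped with your release.

```python
import requests

# hypothetical payload: field names follow the Postman collection
# (image fields accept exact paths, URLs or base64 strings);
# adjust the keys if your deepface version expects different ones
payload = {
    "img1_path": "tests/dataset/img1.jpg",
    "img2_path": "tests/dataset/img2.jpg",
    "model_name": "VGG-Face",
}

resp = requests.post("http://localhost:5005/verify", json=payload)
resp.raise_for_status()
print(resp.json())  # e.g. a dictionary with "verified", "distance", ...
```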

**Command Line Interface** - [`Demo`](https://youtu.be/PKKTAr3ts2s)

DeepFace comes with a command line interface as well. You can access its functions from the command line as shown below. The deepface command expects the function name as its first argument, followed by the function arguments.

```shell
#face verification
$ deepface verify -img1_path tests/dataset/img1.jpg -img2_path tests/dataset/img2.jpg

#facial analysis
$ deepface analyze -img_path tests/dataset/img1.jpg
```

You can also run these commands if you are running deepface with docker. Please follow the instructions in the [shell script](https://github.com/serengil/deepface/blob/master/scripts/dockerize.sh#L17).

**Large Scale Facial Recognition** - [`Playlist`](https://www.youtube.com/playlist?list=PLsS_1RYmYQQGSJu_Z3OVhXhGmZ86_zuIm)

If your task requires facial recognition on large datasets, you should combine DeepFace with a vector index or a vector database. This setup performs [approximate nearest neighbor](https://youtu.be/c10w0Ptn_CU) searches instead of exact ones, allowing you to identify a face in a database containing billions of entries within milliseconds. Common vector index solutions include [Annoy](https://youtu.be/Jpxm914o2xk), [Faiss](https://youtu.be/6AmEvDTKT-k), [Voyager](https://youtu.be/2ZYTV9HlFdU), [NMSLIB](https://youtu.be/EVBhO8rbKbg) and [ElasticSearch](https://youtu.be/i4GvuOmzKzo). For vector databases, popular options are [Postgres with its pgvector extension](https://youtu.be/Xfv4hCWvkp0) and [RediSearch](https://youtu.be/yrXlS0d6t4w).
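Whichever index or database you choose, the building block is the same: embeddings from `DeepFace.represent` compared with a distance metric. A minimal brute-force (exact nearest neighbor) sketch with numpy over a small in-memory gallery — illustrative only, and the gallery paths are hypothetical; a real large-scale setup would push these vectors into one of the indexes or databases mentioned here.

```python
import numpy as np
from deepface import DeepFace

# hypothetical small gallery of enrolled images
gallery = ["db/alice.jpg", "db/bob.jpg", "db/carol.jpg"]

def embed(path):
    # take the first detected face's embedding as a numpy vector
    return np.array(DeepFace.represent(img_path = path)[0]["embedding"])

vectors = np.stack([embed(p) for p in gallery])
query = embed("query.jpg")

# cosine distance against every enrolled vector (exact, brute-force search)
dists = 1 - (vectors @ query) / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
best = int(np.argmin(dists))
print(gallery[best], dists[best])
```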

If, on the other hand, your task involves facial recognition on small to moderate sized databases, you can adopt a relational database such as [Postgres](https://youtu.be/f41sLxn1c0k) or [SQLite](https://youtu.be/_1ShBeWToPg), or a NoSQL database such as [Mongo](https://youtu.be/dmprgum9Xu8), [Redis](https://youtu.be/X7DSpUMVTsw) or [Cassandra](https://youtu.be/J_yXpc3Y8Ec), to perform exact nearest neighbor search.

## Contribution

Pull requests are more than welcome! If you are planning to contribute a large patch, please create an issue first to get any upfront questions or design decisions out of the way.

Before creating a PR, you should run the unit tests and linting locally. Once a PR is sent, the GitHub test workflow runs automatically, and the unit test and linting jobs are available under [GitHub actions](https://github.com/serengil/deepface/actions) before approval.

## Support

There are many ways to support a project - starring ⭐️ the GitHub repo is just one 🙏

If you do like this work, then you can support it financially on [Patreon](https://www.patreon.com/serengil?repo=deepface), [GitHub Sponsors](https://github.com/sponsors/serengil) or [Buy Me a Coffee](https://buymeacoffee.com/serengil). Also, your company's logo will be shown in the README on GitHub and PyPI if you become a gold, silver or bronze sponsor.

## Citation

Please cite deepface in your publications if it helps your research - see [`CITATIONS`](https://github.com/serengil/deepface/blob/master/CITATION.md) for more details. Here are its BibTeX entries:

If you use deepface in your research for facial recognition or face detection purposes, please cite these publications:

```BibTeX
@article{serengil2024lightface,
  title     = {A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules},
  author    = {Serengil, Sefik and Ozpinar, Alper},
  journal   = {Journal of Information Technologies},
  volume    = {17},
  number    = {2},
  pages     = {95-107},
  year      = {2024},
  doi       = {10.17671/gazibtd.1399077},
  url       = {https://dergipark.org.tr/en/pub/gazibtd/issue/84331/1399077},
  publisher = {Gazi University}
}
```

```BibTeX
@inproceedings{serengil2020lightface,
  title        = {LightFace: A Hybrid Deep Face Recognition Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
  pages        = {23-27},
  year         = {2020},
  doi          = {10.1109/ASYU50717.2020.9259802},
  url          = {https://ieeexplore.ieee.org/document/9259802},
  organization = {IEEE}
}
```

On the other hand, if you use deepface in your research for facial attribute analysis purposes such as age, gender, emotion or ethnicity prediction tasks, please cite this publication:

```BibTeX
@inproceedings{serengil2021lightface,
  title        = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
  pages        = {1-4},
  year         = {2021},
  doi          = {10.1109/ICEET53442.2021.9659697},
  url          = {https://ieeexplore.ieee.org/document/9659697},
  organization = {IEEE}
}
```

Also, if you use deepface in your GitHub projects, please add `deepface` to your `requirements.txt`.

## Licence

DeepFace is licensed under the MIT License - see [`LICENSE`](https://github.com/serengil/deepface/blob/master/LICENSE) for more details.

DeepFace wraps some external face recognition models: [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/), [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md) (both 128d and 512d), [OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/LICENSE), [DeepFace](https://github.com/swghosh/DeepFace), [DeepID](https://github.com/Ruoyiran/DeepID/blob/master/LICENSE.md), [ArcFace](https://github.com/leondgarse/Keras_insightface/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/dlib/LICENSE.txt), [SFace](https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/LICENSE) and [GhostFaceNet](https://github.com/HamadYA/GhostFaceNets/blob/main/LICENSE). Besides, the age, gender and race / ethnicity models were trained on the backbone of VGG-Face with transfer learning.
Similarly, DeepFace wraps many face detectors: [OpenCv](https://github.com/opencv/opencv/blob/4.x/LICENSE), [Ssd](https://github.com/opencv/opencv/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/LICENSE.txt), [MtCnn](https://github.com/ipazc/mtcnn/blob/master/LICENSE), [Fast MtCnn](https://github.com/timesler/facenet-pytorch/blob/master/LICENSE.md), [RetinaFace](https://github.com/serengil/retinaface/blob/master/LICENSE), [MediaPipe](https://github.com/google/mediapipe/blob/master/LICENSE), [YuNet](https://github.com/ShiqiYu/libfacedetection/blob/master/LICENSE), [Yolo](https://github.com/derronqi/yolov8-face/blob/main/LICENSE) and [CenterFace](https://github.com/Star-Clouds/CenterFace/blob/master/LICENSE). Finally, DeepFace optionally uses [face anti spoofing](https://github.com/minivision-ai/Silent-Face-Anti-Spoofing/blob/master/LICENSE) to determine whether given images are real or fake.

License types will be inherited when you intend to utilize those models. Please check the license types of those models for production purposes.

The DeepFace [logo](https://thenounproject.com/term/face-recognition/2965879/) was created by [Adrien Coquet](https://thenounproject.com/coquet_adrien/) and is licensed under the [Creative Commons: By Attribution 3.0 License](https://creativecommons.org/licenses/by/3.0/).