[Object Detection] You Only Look Once: Unified, Real-Time Object Detection

You Only Look Once: Unified, Real-Time Object Detection

Abstract

We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.
Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.

1. Introduction

Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms for object detection would allow computers to drive cars without specialized sensors, enable assistive devices to convey real-time scene information to human users, and unlock the potential for general purpose, responsive robotic systems.
Current detection systems repurpose classifiers to perform detection. To detect an object, these systems take a classifier for that object and evaluate it at various locations and scales in a test image. Systems like deformable parts models (DPM) use a sliding window approach where the classifier is run at evenly spaced locations over the entire image [10].
More recent approaches like R-CNN use region proposal methods to first generate potential bounding boxes in an image and then run a classifier on these proposed boxes. After classification, post-processing is used to refine the bounding boxes, eliminate duplicate detections, and rescore the boxes based on other objects in the scene [13]. These complex pipelines are slow and hard to optimize because each individual component must be trained separately.
We reframe object detection as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. Using our system, you only look once (YOLO) at an image to predict what objects are present and where they are.
YOLO is refreshingly simple: see Figure 1. A single convolutional network simultaneously predicts multiple bounding boxes and class probabilities for those boxes. YOLO trains on full images and directly optimizes detection performance. This unified model has several benefits over traditional methods of object detection.

Figure 1: The YOLO Detection System. Processing images with YOLO is simple and straightforward. Our system (1) resizes the input image to 448 × 448, (2) runs a single convolutional network on the image, and (3) thresholds the resulting detections by the model’s confidence.

First, YOLO is extremely fast. Since we frame detection as a regression problem we don’t need a complex pipeline. We simply run our neural network on a new image at test time to predict detections. Our base network runs at 45 frames per second with no batch processing on a Titan X GPU and a fast version runs at more than 150 fps. This means we can process streaming video in real-time with less than 25 milliseconds of latency. Furthermore, YOLO achieves more than twice the mean average precision of other real-time systems.
For a demo of our system running in real-time on a webcam please see our project webpage: http://pjreddie.com/yolo/.
Second, YOLO reasons globally about the image when making predictions. Unlike sliding window and region proposal-based techniques, YOLO sees the entire image during training and test time so it implicitly encodes contextual information about classes as well as their appearance. Fast R-CNN, a top detection method [14], mistakes background patches in an image for objects because it can’t see the larger context. YOLO makes less than half the number of background errors compared to Fast R-CNN.
Third, YOLO learns generalizable representations of objects. When trained on natural images and tested on artwork, YOLO outperforms top detection methods like DPM and R-CNN by a wide margin. Since YOLO is highly generalizable it is less likely to break down when applied to new domains or unexpected inputs.
YOLO still lags behind state-of-the-art detection systems in accuracy. While it can quickly identify objects in images it struggles to precisely localize some objects, especially small ones. We examine these tradeoffs further in our experiments.
All of our training and testing code is open source. A variety of pretrained models are also available to download.

2. Unified Detection

We unify the separate components of object detection into a single neural network. Our network uses features from the entire image to predict each bounding box. It also predicts all bounding boxes across all classes for an image simultaneously. This means our network reasons globally about the full image and all the objects in the image. The YOLO design enables end-to-end training and real-time speeds while maintaining high average precision.
Our system divides the input image into an S × S grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Each grid cell predicts B bounding boxes and confidence scores for those boxes. These confidence scores reflect how confident the model is that the box contains an object and also how accurate it thinks the box is that it predicts. Formally we define confidence as $\operatorname{Pr}(\text{Object}) * \mathrm{IOU}_{\text{pred}}^{\text{truth}}$. If no object exists in that cell, the confidence scores should be zero. Otherwise we want the confidence score to equal the intersection over union (IOU) between the predicted box and the ground truth.
Each bounding box consists of 5 predictions: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the box relative to the bounds of the grid cell. The width and height are predicted relative to the whole image. Finally the confidence prediction represents the IOU between the predicted box and any ground truth box.
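To make this parameterization concrete, here is a minimal sketch in NumPy of encoding one ground-truth box into these grid-relative targets (the helper name and pixel-coordinate convention are assumptions for illustration, not the paper's code):

```python
import numpy as np

def encode_box(box, img_w, img_h, S=7):
    """Encode a ground-truth box (x_center, y_center, w, h), in pixels,
    into YOLO's grid-relative targets. Illustrative helper, not the paper's code."""
    x, y, w, h = box
    w_t, h_t = w / img_w, h / img_h          # width/height relative to the whole image
    col = min(int(x / img_w * S), S - 1)     # grid cell containing the box center
    row = min(int(y / img_h * S), S - 1)
    x_t = x / img_w * S - col                # center offset within the cell, in [0, 1]
    y_t = y / img_h * S - row
    return row, col, np.array([x_t, y_t, w_t, h_t])

# A 100x150 box centered at (224, 224) in a 448x448 image lands in cell (3, 3)
# with offsets (0.5, 0.5) and relative size (0.223, 0.335).
print(encode_box((224, 224, 100, 150), 448, 448))
```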
Each grid cell also predicts C conditional class probabilities, $\operatorname{Pr}(\text{Class}_i \mid \text{Object})$. These probabilities are conditioned on the grid cell containing an object. We only predict one set of class probabilities per grid cell, regardless of the number of boxes B.
At test time we multiply the conditional class probabilities and the individual box confidence predictions,

$$\operatorname{Pr}(\text{Class}_i \mid \text{Object}) * \operatorname{Pr}(\text{Object}) * \mathrm{IOU}_{\text{pred}}^{\text{truth}} = \operatorname{Pr}(\text{Class}_i) * \mathrm{IOU}_{\text{pred}}^{\text{truth}}$$

which gives us class-specific confidence scores for each box. These scores encode both the probability of that class appearing in the box and how well the predicted box fits the object.

Figure 2: The Model. Our system models detection as a regression problem. It divides the image into an S × S grid and for each grid cell predicts B bounding boxes, confidence for those boxes, and C class probabilities. These predictions are encoded as an S × S × (B ∗ 5 + C) tensor.

For evaluating YOLO on PASCAL VOC, we use S = 7, B = 2. PASCAL VOC has 20 labelled classes so C = 20. Our final prediction is a 7 × 7 × 30 tensor.
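A short sketch of how this test-time multiplication plays out on the 7 × 7 × 30 prediction tensor, assuming the layout splits each cell into B boxes of 5 values followed by C class probabilities (NumPy; variable names are illustrative):

```python
import numpy as np

S, B, C = 7, 2, 20
pred = np.random.rand(S, S, B * 5 + C)               # the 7 x 7 x 30 tensor

boxes       = pred[..., :B * 5].reshape(S, S, B, 5)  # (x, y, w, h, confidence) per box
box_conf    = boxes[..., 4]                          # Pr(Object) * IOU, shape (7, 7, 2)
class_probs = pred[..., B * 5:]                      # Pr(Class_i | Object), shape (7, 7, 20)

# Class-specific confidence, Pr(Class_i) * IOU, for every box and class:
# broadcast each cell's class probabilities over its B boxes.
class_scores = box_conf[..., :, None] * class_probs[..., None, :]
print(class_scores.shape)                            # (7, 7, 2, 20)
```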

2.1. Network Design

We implement this model as a convolutional neural network and evaluate it on the PASCAL VOC detection dataset [9]. The initial convolutional layers of the network extract features from the image while the fully connected layers predict the output probabilities and coordinates.
Our network architecture is inspired by the GoogLeNet model for image classification [34]. Our network has 24 convolutional layers followed by 2 fully connected layers. Instead of the inception modules used by GoogLeNet, we simply use 1 × 1 reduction layers followed by 3 × 3 convolutional layers, similar to Lin et al. [22] The full network is shown in Figure 3.

Figure 3: The Architecture. Our detection network has 24 convolutional layers followed by 2 fully connected layers. Alternating 1 × 1 convolutional layers reduce the features space from preceding layers. We pretrain the convolutional layers on the ImageNet classification task at half the resolution (224 × 224 input image) and then double the resolution for detection.
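To illustrate the 1 × 1 reduction followed by 3 × 3 convolution pattern, here is a sketch of one such block in PyTorch (the paper's implementation is in Darknet; the channel sizes here are purely illustrative):

```python
import torch
import torch.nn as nn

def reduction_block(in_ch, mid_ch, out_ch):
    """A 1x1 'reduction' conv that shrinks the channel dimension, then a 3x3 conv,
    each followed by the leaky ReLU (slope 0.1) used throughout the network."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=1),
        nn.LeakyReLU(0.1),
        nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),
        nn.LeakyReLU(0.1),
    )

block = reduction_block(512, 256, 512)    # illustrative channel sizes
x = torch.randn(1, 512, 14, 14)
print(block(x).shape)                     # torch.Size([1, 512, 14, 14])
```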

We also train a fast version of YOLO designed to push the boundaries of fast object detection. Fast YOLO uses a neural network with fewer convolutional layers (9 instead of 24) and fewer filters in those layers. Other than the size of the network, all training and testing parameters are the same between YOLO and Fast YOLO.
The final output of our network is the 7 × 7 × 30 tensor of predictions.

2.2. Training

We pretrain our convolutional layers on the ImageNet 1000-class competition dataset [30]. For pretraining we use the first 20 convolutional layers from Figure 3 followed by an average-pooling layer and a fully connected layer. We train this network for approximately a week and achieve a single crop top-5 accuracy of 88% on the ImageNet 2012 validation set, comparable to the GoogLeNet models in Caffe's Model Zoo [24]. We use the Darknet framework for all training and inference [26].
We then convert the model to perform detection. Ren et al. show that adding both convolutional and connected layers to pretrained networks can improve performance [29]. Following their example, we add four convolutional layers and two fully connected layers with randomly initialized weights. Detection often requires fine-grained visual information so we increase the input resolution of the network from 224 × 224 to 448 × 448.
Our final layer predicts both class probabilities and bounding box coordinates. We normalize the bounding box width and height by the image width and height so that they fall between 0 and 1. We parametrize the bounding box x and y coordinates to be offsets of a particular grid cell location so they are also bounded between 0 and 1.
We use a linear activation function for the final layer and all other layers use the following leaky rectified linear activation:

$$\phi(x) = \begin{cases} x, & \text{if } x > 0 \\ 0.1x, & \text{otherwise} \end{cases}$$

We optimize for sum-squared error in the output of our model. We use sum-squared error because it is easy to optimize, however it does not perfectly align with our goal of maximizing average precision. It weights localization error equally with classification error which may not be ideal. Also, in every image many grid cells do not contain any object. This pushes the “confidence” scores of those cells towards zero, often overpowering the gradient from cells that do contain objects. This can lead to model instability, causing training to diverge early on.
To remedy this, we increase the loss from bounding box coordinate predictions and decrease the loss from confidence predictions for boxes that don't contain objects. We use two parameters, $\lambda_{\text{coord}}$ and $\lambda_{\text{noobj}}$, to accomplish this. We set $\lambda_{\text{coord}} = 5$ and $\lambda_{\text{noobj}} = .5$.
Sum-squared error also equally weights errors in large boxes and small boxes. Our error metric should reflect that small deviations in large boxes matter less than in small boxes. To partially address this we predict the square root of the bounding box width and height instead of the width and height directly.
YOLO predicts multiple bounding boxes per grid cell. At training time we only want one bounding box predictor to be responsible for each object. We assign one predictor to be “responsible” for predicting an object based on which prediction has the highest current IOU with the ground truth. This leads to specialization between the bounding box predictors. Each predictor gets better at predicting certain sizes, aspect ratios, or classes of object, improving overall recall.
During training we optimize the following, multi-part loss function:

$$
\begin{aligned}
\lambda_{\text{coord}} & \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ \left(x_i - \hat{x}_i\right)^2 + \left(y_i - \hat{y}_i\right)^2 \right] \\
+ \lambda_{\text{coord}} & \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
+ & \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left(C_i - \hat{C}_i\right)^2 + \lambda_{\text{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{noobj}} \left(C_i - \hat{C}_i\right)^2 \\
+ & \sum_{i=0}^{S^2} \mathbb{1}_i^{\text{obj}} \sum_{c \in \text{classes}} \left(p_i(c) - \hat{p}_i(c)\right)^2
\end{aligned}
$$

where $\mathbb{1}_i^{\text{obj}}$ denotes if object appears in cell $i$ and $\mathbb{1}_{ij}^{\text{obj}}$ denotes that the $j$th bounding box predictor in cell $i$ is “responsible” for that prediction.
Note that the loss function only penalizes classification error if an object is present in that grid cell (hence the conditional class probability discussed earlier). It also only penalizes bounding box coordinate error if that predictor is “responsible” for the ground truth box (i.e. has the highest IOU of any predictor in that grid cell).
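A simplified NumPy sketch of this loss is given below. The tensor layout, the precomputed masks, and storing $\sqrt{w}$ and $\sqrt{h}$ directly in the box tensors are simplifying assumptions; the actual Darknet implementation differs in such details:

```python
import numpy as np

def yolo_loss(pred, target, obj_mask, resp_mask,
              lambda_coord=5.0, lambda_noobj=0.5):
    """Sum-squared YOLO loss, simplified.
    pred, target: tuples of (boxes, class_probs) where
        boxes has shape (S, S, B, 5) as (x, y, sqrt(w), sqrt(h), confidence)
        class_probs has shape (S, S, C)
    obj_mask:  (S, S)    1 where a ground-truth object center falls in the cell
    resp_mask: (S, S, B) 1 for the predictor with highest IOU in each object cell
    """
    (boxes_p, cls_p), (boxes_t, cls_t) = pred, target
    coord_err = ((boxes_p[..., :4] - boxes_t[..., :4]) ** 2).sum(-1)
    conf_err  = (boxes_p[..., 4] - boxes_t[..., 4]) ** 2

    loss  = lambda_coord * (resp_mask * coord_err).sum()         # coordinates
    loss += (resp_mask * conf_err).sum()                         # confidence, responsible boxes
    loss += lambda_noobj * ((1 - resp_mask) * conf_err).sum()    # confidence, all other boxes
    loss += (obj_mask * ((cls_p - cls_t) ** 2).sum(-1)).sum()    # class probs, object cells
    return loss
```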
We train the network for about 135 epochs on the training and validation data sets from PASCAL VOC 2007 and 2012. When testing on 2012 we also include the VOC 2007 test data for training. Throughout training we use a batch size of 64, a momentum of 0.9 and a decay of 0.0005.
Our learning rate schedule is as follows: For the first epochs we slowly raise the learning rate from $10^{-3}$ to $10^{-2}$. If we start at a high learning rate our model often diverges due to unstable gradients. We continue training with $10^{-2}$ for 75 epochs, then $10^{-3}$ for 30 epochs, and finally $10^{-4}$ for 30 epochs.
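This schedule can be written as a simple function of the epoch index. The paper does not state how long the initial ramp from $10^{-3}$ to $10^{-2}$ lasts, so the warm-up length below is an assumption:

```python
def learning_rate(epoch, warmup_epochs=5):
    """Piecewise learning rate schedule from the paper; warm-up length assumed."""
    if epoch < warmup_epochs:                        # slowly raise 1e-3 -> 1e-2
        return 1e-3 + (1e-2 - 1e-3) * epoch / warmup_epochs
    if epoch < warmup_epochs + 75:                   # 75 epochs at 1e-2
        return 1e-2
    if epoch < warmup_epochs + 105:                  # then 30 epochs at 1e-3
        return 1e-3
    return 1e-4                                      # final 30 epochs at 1e-4
```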
To avoid overfitting we use dropout and extensive data augmentation. A dropout layer with rate = .5 after the first connected layer prevents co-adaptation between layers [18]. For data augmentation we introduce random scaling and translations of up to 20% of the original image size. We also randomly adjust the exposure and saturation of the image by up to a factor of 1.5 in the HSV color space.

2.3. Inference

Just like in training, predicting detections for a test image only requires one network evaluation. On PASCAL VOC the network predicts 98 bounding boxes per image and class probabilities for each box. YOLO is extremely fast at test time since it only requires a single network evaluation, unlike classifier-based methods.
The grid design enforces spatial diversity in the bounding box predictions. Often it is clear which grid cell an object falls in to and the network only predicts one box for each object. However, some large objects or objects near the border of multiple cells can be well localized by multiple cells. Non-maximal suppression can be used to fix these multiple detections. While not critical to performance as it is for R-CNN or DPM, non-maximal suppression adds 2-3% in mAP.
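A minimal sketch of greedy non-maximal suppression as used at this step (NumPy; the corner-format box layout and the 0.5 overlap threshold are assumptions of this illustration):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best, order = order[0], order[1:]
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep
```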

2.4. Limitations of YOLO

YOLO imposes strong spatial constraints on bounding box predictions since each grid cell only predicts two boxes and can only have one class. This spatial constraint limits the number of nearby objects that our model can predict. Our model struggles with small objects that appear in groups, such as flocks of birds.
Since our model learns to predict bounding boxes from data, it struggles to generalize to objects in new or unusual aspect ratios or configurations. Our model also uses relatively coarse features for predicting bounding boxes since our architecture has multiple downsampling layers from the input image.
Finally, while we train on a loss function that approximates detection performance, our loss function treats errors the same in small bounding boxes versus large bounding boxes. A small error in a large box is generally benign but a small error in a small box has a much greater effect on IOU. Our main source of error is incorrect localizations.

3. Comparison to Other Detection Systems

Object detection is a core problem in computer vision. Detection pipelines generally start by extracting a set of robust features from input images (Haar [25], SIFT [23], HOG [4], convolutional features [6]). Then, classifiers [36, 21, 13, 10] or localizers [1, 32] are used to identify objects in the feature space. These classifiers or localizers are run either in sliding window fashion over the whole image or on some subset of regions in the image [35, 15, 39]. We compare the YOLO detection system to several top detection frameworks, highlighting key similarities and differences.

Deformable parts models.

Deformable parts models (DPM) use a sliding window approach to object detection [10]. DPM uses a disjoint pipeline to extract static features, classify regions, predict bounding boxes for high scoring regions, etc. Our system replaces all of these disparate parts with a single convolutional neural network. The network performs feature extraction, bounding box prediction, nonmaximal suppression, and contextual reasoning all concurrently. Instead of static features, the network trains the features in-line and optimizes them for the detection task. Our unified architecture leads to a faster, more accurate model than DPM.

R-CNN.

R-CNN and its variants use region proposals instead of sliding windows to find objects in images. Selective Search [35] generates potential bounding boxes, a convolutional network extracts features, an SVM scores the boxes, a linear model adjusts the bounding boxes, and non-max suppression eliminates duplicate detections. Each stage of this complex pipeline must be precisely tuned independently and the resulting system is very slow, taking more than 40 seconds per image at test time [14].
YOLO shares some similarities with R-CNN. Each grid cell proposes potential bounding boxes and scores those boxes using convolutional features. However, our system puts spatial constraints on the grid cell proposals which helps mitigate multiple detections of the same object. Our system also proposes far fewer bounding boxes, only 98 per image compared to about 2000 from Selective Search. Finally, our system combines these individual components into a single, jointly optimized model.

Other Fast Detectors

Fast and Faster R-CNN focus on speeding up the R-CNN framework by sharing computation and using neural networks to propose regions instead of Selective Search [14] [28]. While they offer speed and accuracy improvements over R-CNN, both still fall short of real-time performance.
Many research efforts focus on speeding up the DPM pipeline [31] [38] [5]. They speed up HOG computation, use cascades, and push computation to GPUs. However, only 30Hz DPM [31] actually runs in real-time.
Instead of trying to optimize individual components of a large detection pipeline, YOLO throws out the pipeline entirely and is fast by design.
Detectors for single classes like faces or people can be highly optimized since they have to deal with much less variation [37]. YOLO is a general purpose detector that learns to detect a variety of objects simultaneously.

Deep MultiBox.

Unlike R-CNN, Szegedy et al. train a convolutional neural network to predict regions of interest [8] instead of using Selective Search. MultiBox can also perform single object detection by replacing the confidence prediction with a single class prediction. However, MultiBox cannot perform general object detection and is still just a piece in a larger detection pipeline, requiring further image patch classification. Both YOLO and MultiBox use a convolutional network to predict bounding boxes in an image but YOLO is a complete detection system.

OverFeat.

Sermanet et al. train a convolutional neural network to perform localization and adapt that localizer to perform detection [32]. OverFeat efficiently performs sliding window detection but it is still a disjoint system. OverFeat optimizes for localization, not detection performance. Like DPM, the localizer only sees local information when making a prediction. OverFeat cannot reason about global context and thus requires significant post-processing to produce coherent detections.

MultiGrasp.

Our work is similar in design to work on grasp detection by Redmon et al. [27] Our grid approach to bounding box prediction is based on the MultiGrasp system for regression to grasps. However, grasp detection is a much simpler task than object detection. MultiGrasp only needs to predict a single graspable region for an image containing one object. It doesn't have to estimate the size, location, or boundaries of the object or predict its class, only find a region suitable for grasping. YOLO predicts both bounding boxes and class probabilities for multiple objects of multiple classes in an image.

4. Experiments

First we compare YOLO with other real-time detection systems on PASCAL VOC 2007. To understand the differences between YOLO and R-CNN variants we explore the errors on VOC 2007 made by YOLO and Fast R-CNN, one of the highest performing versions of R-CNN [14]. Based on the different error profiles we show that YOLO can be used to rescore Fast R-CNN detections and reduce the errors from background false positives, giving a significant performance boost. We also present VOC 2012 results and compare mAP to current state-of-the-art methods. Finally, we show that YOLO generalizes to new domains better than other detectors on two artwork datasets.

Table 1: Real-Time Systems on PASCAL VOC 2007. Comparing the performance and speed of fast detectors. Fast YOLO is the fastest detector on record for PASCAL VOC detection and is still twice as accurate as any other real-time detector. YOLO is 10 mAP more accurate than the fast version while still well above real-time in speed.

4.1. Comparison to Other Real-Time Systems

Many research efforts in object detection focus on making standard detection pipelines fast. [5] [38] [31] [14] [17] [28] However, only Sadeghi et al. actually produce a detection system that runs in real-time (30 frames per second or better) [31]. We compare YOLO to their GPU implementation of DPM which runs either at 30Hz or 100Hz. While the other efforts don’t reach the real-time milestone we also compare their relative mAP and speed to examine the accuracy-performance tradeoffs available in object detection systems.
Fast YOLO is the fastest object detection method on PASCAL; as far as we know, it is the fastest extant object detector. With 52.7% mAP, it is more than twice as accurate as prior work on real-time detection. YOLO pushes mAP to 63.4% while still maintaining real-time performance.
We also train YOLO using VGG-16. This model is more accurate but also significantly slower than YOLO. It is useful for comparison to other detection systems that rely on VGG-16 but since it is slower than real-time the rest of the paper focuses on our faster models.
Fastest DPM effectively speeds up DPM without sacrificing much mAP but it still misses real-time performance by a factor of 2 [38]. It also is limited by DPM’s relatively low accuracy on detection compared to neural network approaches.
R-CNN minus R replaces Selective Search with static bounding box proposals [20]. While it is much faster than R-CNN, it still falls short of real-time and takes a significant accuracy hit from not having good proposals.
Fast R-CNN speeds up the classification stage of R-CNN but it still relies on selective search which can take around 2 seconds per image to generate bounding box proposals. Thus it has high mAP but at 0.5 fps it is still far from real-time.
The recent Faster R-CNN replaces selective search with a neural network to propose bounding boxes, similar to Szegedy et al. [8] In our tests, their most accurate model achieves 7 fps while a smaller, less accurate one runs at 18 fps. The VGG-16 version of Faster R-CNN is 10 mAP higher but is also 6 times slower than YOLO. The Zeiler-Fergus Faster R-CNN is only 2.5 times slower than YOLO but is also less accurate.

4.2. VOC 2007 Error Analysis

To further examine the differences between YOLO and state-of-the-art detectors, we look at a detailed breakdown of results on VOC 2007. We compare YOLO to Fast R-CNN since Fast R-CNN is one of the highest performing detectors on PASCAL and its detections are publicly available.
We use the methodology and tools of Hoiem et al. [19] For each category at test time we look at the top N predictions for that category. Each prediction is either correct or it is classified based on the type of error (a code sketch of this categorization follows the list):

  • Correct: correct class and IOU > .5
  • Localization: correct class, .1 < IOU < .5
  • Similar: class is similar, IOU > .1
  • Other: class is wrong, IOU > .1
  • Background: IOU < .1 for any object
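As referenced above, this categorization can be written as a small function (a sketch; the `similar_classes` mapping, e.g. grouping animal or vehicle classes together, is an assumption of this illustration):

```python
def categorize(pred_class, true_class, iou_val, similar_classes):
    """Hoiem-style error categories; rules are checked in the order listed above."""
    if pred_class == true_class and iou_val > 0.5:
        return "Correct"
    if pred_class == true_class and 0.1 < iou_val < 0.5:
        return "Localization"
    if true_class in similar_classes.get(pred_class, set()) and iou_val > 0.1:
        return "Similar"
    if pred_class != true_class and iou_val > 0.1:
        return "Other"
    return "Background"   # IOU < 0.1 with any object
```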

Figure 4 shows the breakdown of each error type averaged across all 20 classes.

Figure 4: Error Analysis: Fast R-CNN vs. YOLO. These charts show the percentage of localization and background errors in the top N detections for various categories (N = # objects in that category).

YOLO struggles to localize objects correctly. Localization errors account for more of YOLO's errors than all other sources combined. Fast R-CNN makes much fewer localization errors but far more background errors. 13.6% of its top detections are false positives that don't contain any objects. Fast R-CNN is almost 3x more likely to predict background detections than YOLO.

4.3. Combining Fast R-CNN and YOLO

YOLO makes far fewer background mistakes than Fast R-CNN. By using YOLO to eliminate background detections from Fast R-CNN we get a significant boost in performance. For every bounding box that R-CNN predicts we check to see if YOLO predicts a similar box. If it does, we give that prediction a boost based on the probability predicted by YOLO and the overlap between the two boxes.
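A sketch of this rescoring rule is shown below. The paper does not give the exact boost formula, so the additive, overlap-weighted boost here is illustrative only; an IOU function such as the one from the NMS sketch in Section 2.3 is passed in:

```python
def rescore(fastrcnn_dets, yolo_dets, iou_fn):
    """Boost Fast R-CNN detections that YOLO agrees with.
    Each detection is (box, score); iou_fn is an IOU function such as the
    one defined in the NMS sketch. The additive boost is an assumption."""
    boosted = []
    for box_f, score_f in fastrcnn_dets:
        agreement = max((iou_fn(box_f, box_y) * p_y for box_y, p_y in yolo_dets),
                        default=0.0)
        boosted.append((box_f, score_f + agreement))
    return boosted
```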
The best Fast R-CNN model achieves a mAP of 71.8% on the VOC 2007 test set. When combined with YOLO, its mAP increases by 3.2% to 75.0%. We also tried combining the top Fast R-CNN model with several other versions of Fast R-CNN. Those ensembles produced small increases in mAP between .3 and .6%, see Table 2 for details.

Table 2: Model combination experiments on VOC 2007. We examine the effect of combining various models with the best version of Fast R-CNN. Other versions of Fast R-CNN provide only a small benefit while YOLO provides a significant performance boost.

The boost from YOLO is not simply a byproduct of model ensembling since there is little benefit from combining different versions of Fast R-CNN. Rather, it is precisely because YOLO makes different kinds of mistakes at test time that it is so effective at boosting Fast R-CNN’s performance.
Unfortunately, this combination doesn't benefit from the speed of YOLO since we run each model separately and then combine the results. However, since YOLO is so fast it doesn't add any significant computational time compared to Fast R-CNN.

4.4. VOC 2012 Results

On the VOC 2012 test set, YOLO scores 57.9% mAP. This is lower than the current state of the art, closer to the original R-CNN using VGG-16, see Table 3. Our system struggles with small objects compared to its closest competitors. On categories like bottle, sheep, and tv/monitor YOLO scores 8-10% lower than R-CNN or Feature Edit. However, on other categories like cat and train YOLO achieves higher performance.

Table 3: PASCAL VOC 2012 Leaderboard. YOLO compared with the full comp4 (outside data allowed) public leaderboard as of November 6th, 2015. Mean average precision and per-class average precision are shown for a variety of detection methods. YOLO is the only real-time detector. Fast R-CNN + YOLO is the fourth highest scoring method, with a 2.3% boost over Fast R-CNN.

Our combined Fast R-CNN + YOLO model is one of the highest performing detection methods. Fast R-CNN gets a 2.3% improvement from the combination with YOLO, boosting it 5 spots up on the public leaderboard.

4.5. Generalizability: Person Detection in Artwork

Academic datasets for object detection draw the training and testing data from the same distribution. In real-world applications it is hard to predict all possible use cases and the test data can diverge from what the system has seen before [3]. We compare YOLO to other detection systems on the Picasso Dataset [12] and the People-Art Dataset [3], two datasets for testing person detection on artwork.
Figure 5 shows comparative performance between YOLO and other detection methods. For reference, we give VOC 2007 AP on person detection, where all models are trained only on VOC 2007 data. On Picasso, models are trained on VOC 2012 while on People-Art they are trained on VOC 2010.

(a) Picasso Dataset precision-recall curves.
(b) Quantitative results on the VOC 2007, Picasso, and People-Art datasets.
Figure 5: Generalization results on Picasso and People-Art datasets.

R-CNN has high AP on VOC 2007. However, R-CNN drops off considerably when applied to artwork. R-CNN uses Selective Search for bounding box proposals which is tuned for natural images. The classifier step in R-CNN only sees small regions and needs good proposals.
DPM maintains its AP well when applied to artwork. Prior work theorizes that DPM performs well because it has strong spatial models of the shape and layout of objects. Though DPM doesn’t degrade as much as R-CNN, it starts from a lower AP.
YOLO has good performance on VOC 2007 and its AP degrades less than other methods when applied to artwork. Like DPM, YOLO models the size and shape of objects, as well as relationships between objects and where objects commonly appear. Artwork and natural images are very different on a pixel level but they are similar in terms of the size and shape of objects, thus YOLO can still predict good bounding boxes and detections.

5. Real-Time Detection In The Wild

YOLO is a fast, accurate object detector, making it ideal for computer vision applications. We connect YOLO to a webcam and verify that it maintains real-time performance, including the time to fetch images from the camera and display the detections.
The resulting system is interactive and engaging. While YOLO processes images individually, when attached to a webcam it functions like a tracking system, detecting objects as they move around and change in appearance.
A demo of the system and the source code can be found on our project website: http://pjreddie.com/yolo/.

6. Conclusion

We introduce YOLO, a unified model for object detection. Our model is simple to construct and can be trained directly on full images. Unlike classifier-based approaches, YOLO is trained on a loss function that directly corresponds to detection performance and the entire model is trained jointly.
Fast YOLO is the fastest general-purpose object detector in the literature and YOLO pushes the state-of-the-art in real-time object detection. YOLO also generalizes well to new domains making it ideal for applications that rely on fast, robust object detection.

Figure 6: Qualitative Results. YOLO running on sample artwork and natural images from the internet. It is mostly accurate although it does think one person is an airplane.

GPT Summary

This article presents a new real-time object detection method called YOLO (You Only Look Once). The core content is summarized below:

  1. The YOLO approach

    • YOLO treats object detection as a regression problem, mapping directly from image pixels to bounding box coordinates and class probabilities.
    • A single neural network predicts bounding boxes and class probabilities in one evaluation.
    • YOLO's unified architecture is extremely fast; the base model processes images in real time at 45 frames per second.
    • A smaller version, Fast YOLO, reaches 155 frames per second while achieving double the mean average precision (mAP) of other real-time detectors.
  2. Advantages of YOLO

    • Speed: YOLO's detection procedure is simple and requires no complex pipeline.
    • Global reasoning: YOLO sees the entire image during training and testing, implicitly encoding contextual information about classes and their appearance.
    • Generalization: the object representations YOLO learns generalize well from natural images to other domains such as artwork.
  3. Network design

    • The YOLO network consists of 24 convolutional layers and 2 fully connected layers, inspired by the GoogLeNet model.
    • The network is pretrained on ImageNet classification and then converted to perform detection, evaluated on the PASCAL VOC dataset.
    • A dedicated multi-part loss function with weighted bounding box coordinate terms is used to optimize the network, and non-maximal suppression removes duplicate detections at inference.
  4. Limitations of YOLO

    • Spatial constraints: each grid cell predicts only two bounding boxes, limiting the model's ability to predict multiple nearby objects.
    • Detecting small objects is challenging, especially when they appear in groups.
    • Bounding boxes are predicted from relatively coarse features, since the architecture contains multiple downsampling layers.
  5. Comparison with other detection systems

    • Compared with traditional detection methods such as DPM, R-CNN, and their variants, YOLO is far faster while remaining more accurate than other real-time detectors.
    • YOLO outperforms other real-time detection systems and generalizes better to new domains such as artwork.
  6. Experimental results

    • On the PASCAL VOC 2007 and 2012 datasets, YOLO is compared against other real-time detection systems.
    • YOLO's performance on artwork datasets demonstrates its strong generalization ability.
    • Combining Fast R-CNN with YOLO significantly improves performance.
  7. Conclusion

    • YOLO is a simple, fast object detection model with strong generalization, suitable for applications that require fast, robust object detection.

The article also mentions the availability of YOLO's open-source code and pretrained models, as well as a real-time detection demo.