Object-oriented software metrics are important for understanding and guaranteeing the quality of object-oriented software. By comparing object-oriented software metrics against their thresholds, we can simply and intuitively evaluate whether a software module is likely to be defective. Methods for deriving metric thresholds mainly include unsupervised learning methods based on the distribution of metric data and supervised learning methods based on the relationship between metrics and defect-proneness. The two types of methods have their own advantages and disadvantages: unsupervised methods do not require label information to derive thresholds and are easy to implement, but the resulting thresholds often perform poorly in defect prediction; supervised methods improve defect prediction performance through machine learning algorithms, but they require label information, which is not easy to obtain, and the techniques linking metrics to defect-proneness are complex. In recent years, researchers working on both types of methods have continued to explore them and have made great progress. At the same time, deriving thresholds for object-oriented software metrics remains challenging. This paper offers a systematic survey of recent research achievements in deriving metric thresholds. First, we introduce the research problem of object-oriented software metric threshold derivation. Then, we describe the main current research work in detail from two aspects: unsupervised and supervised learning methods. After that, we discuss related techniques. Finally, we summarize the opportunities and challenges in this field and outline future research directions.
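To make the threshold-based evaluation concrete, the following is a minimal sketch of one simple unsupervised, distribution-based approach: a threshold is derived from the sample distribution of a metric (here, mean plus one standard deviation, one common convention among the many alternatives surveyed), and classes whose metric value exceeds the threshold are flagged as potentially defect-prone. The metric values and class names are hypothetical illustrations, not data from the survey.

```python
# Sketch of unsupervised, distribution-based threshold derivation.
# Assumption: threshold = mean + 1 standard deviation of the metric's
# sample distribution (one simple convention; not the only one).
from statistics import mean, stdev

def derive_threshold(values):
    """Derive a metric threshold as mean + 1 sample standard deviation."""
    return mean(values) + stdev(values)

def flag_classes(metric_by_class, threshold):
    """Return names of classes whose metric value exceeds the threshold."""
    return [name for name, v in metric_by_class.items() if v > threshold]

# Hypothetical WMC (Weighted Methods per Class) values for five classes.
wmc = {"Parser": 12, "Lexer": 9, "AstNode": 4, "SymbolTable": 7, "CodeGen": 31}

t = derive_threshold(list(wmc.values()))   # about 23.3 for this sample
risky = flag_classes(wmc, t)               # ["CodeGen"]
```

This illustrates why unsupervised methods are easy to implement (no defect labels are needed, only the metric values themselves), and also why their thresholds may predict defects poorly: the cut-off reflects only the shape of the metric distribution, not any observed link to actual defects.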