You need to know: what an algorithm is, the running time of an algorithm, and worst-case running time.
Background: Assume that each object in a database has m grades, one for each of its m attributes. For each attribute, there is a sorted list, which lists each object and its grade under that attribute, sorted by grade (highest grade first). Each object is assigned an overall grade, which is a fixed monotone function of the attribute grades, such as min or average. The problem is to determine the top k objects with the highest overall grades.
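To make the setup concrete, here is a toy instance (the object names and grades are made up for illustration): m = 2 attributes, the overall grade is the min of the attribute grades, and we want the top 2 objects.

```python
# Toy instance: m = 2 attributes, overall grade = min of the attribute grades.
grades = {"a": (1.0, 0.2), "b": (0.8, 0.9), "c": (0.5, 0.6)}

# One sorted list per attribute: (object, grade) pairs, highest grade first.
lists = [
    sorted(((obj, g[i]) for obj, g in grades.items()), key=lambda p: -p[1])
    for i in range(2)
]

# Overall grades: a -> 0.2, b -> 0.8, c -> 0.5.
overall = {obj: min(g) for obj, g in grades.items()}
top2 = sorted(overall, key=overall.get, reverse=True)[:2]
print(top2)  # ['b', 'c']
```

Note that the top objects under the overall grade need not appear near the top of any single attribute's list (object "a" leads attribute 0 but ranks last overall), which is what makes the problem nontrivial.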
The Theorem: On 6th September 2001, Ronald Fagin, Amnon Lotem, and Moni Naor submitted to the Journal of Computer and System Sciences a paper in which they developed an algorithm solving the problem above and proved that it is instance optimal, that is, optimal (within a constant factor) on every database.
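The paper's algorithm, known as the Threshold Algorithm (TA), reads the m sorted lists in parallel, one position at a time; for each newly seen object it looks up the object's remaining grades and computes its overall grade, and it halts once the k best overall grades seen so far are at least the "threshold" obtained by applying the aggregate function to the grades at the current reading depth. The following is a minimal sketch of this idea, not the paper's exact pseudocode; the function and variable names are my own:

```python
import heapq

def threshold_algorithm(sorted_lists, grades, aggregate, k):
    """Top-k objects by overall grade, in the spirit of the Threshold Algorithm.

    sorted_lists: m lists of (object, grade) pairs, each sorted by grade descending.
    grades: dict object -> tuple of its m attribute grades (models random access).
    aggregate: monotone function from an iterable of m grades to the overall grade.
    k: number of top objects to return.
    """
    seen = set()  # objects whose overall grade is already computed
    top = []      # min-heap of (overall grade, object), kept at size <= k
    m = len(sorted_lists)
    depth = 0
    while True:
        last = []  # grade at the current depth in each list
        for i in range(m):
            obj, g = sorted_lists[i][depth]
            last.append(g)
            if obj not in seen:
                seen.add(obj)
                # Random access: fetch all of the object's grades and aggregate.
                heapq.heappush(top, (aggregate(grades[obj]), obj))
                if len(top) > k:
                    heapq.heappop(top)
        # Threshold: an upper bound on the overall grade of any unseen object,
        # by monotonicity of the aggregate function.
        threshold = aggregate(last)
        if len(top) == k and top[0][0] >= threshold:
            return sorted(top, reverse=True)
        depth += 1
```

On the toy instance above with aggregate = min and k = 1, the algorithm halts after reading depth 1 and returns object "b" with overall grade 0.8, without ever scanning the lists to the bottom. Monotonicity of the aggregate is what justifies the stopping rule: any object not yet seen has every attribute grade at most the grade at the current depth, hence overall grade at most the threshold.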
Short context: Traditionally, the efficiency of an algorithm is measured by its worst-case running time. However, given an algorithm A for some problem, it is usually very difficult to prove that its worst-case running time is optimal. And even if it is, there may still exist a different algorithm B with worse worst-case running time that performs much faster than A on most instances. In contrast, the Theorem proves that one specific algorithm is optimal (within a constant factor) on all instances! This is the strongest possible notion of algorithm optimality one can imagine.
Links: A free arXiv version of the original paper is here; the journal version is here.