# Now that we have multiple ranking_pair instances, we can also use
# cross_validate_ranking_trainer().  This performs cross-validation by
# splitting the queries up into folds.  That is, it lets the trainer train on a
# subset of ranking_pair instances and tests on the rest.  It does this over 4
# different splits and returns the overall ranking accuracy based on the held
# out data.  Just like test_ranking_function(), it reports both the ordering
# accuracy and mean average precision.
print("Cross validation results: {}".format(
    dlib.cross_validate_ranking_trainer(trainer, queries, 4)))

# Finally, note that the ranking tools also support the use of sparse vectors
# in addition to dense vectors (which we used above).  So if we wanted to do
# exactly what we did in the first part of the example program above but using
# sparse vectors, we would do it like so:
data = dlib.sparse_ranking_pair()
samp = dlib.sparse_vector()

# Make samp represent the same vector as dlib.vector([1, 0]).  In dlib, a
# sparse vector is just an array of pair objects.  Each pair stores an index
# and a value.  Moreover, the svm-ranking tools require sparse vectors to be
# sorted and to have unique indices.  This means that the indices are listed in
# increasing order and no index value shows up more than once.  If necessary,
# you can use the dlib.make_sparse_vector() routine to make a sparse vector
# object properly sorted with unique indices.
samp.append(dlib.pair(0, 1))
data.relevant.append(samp)

# Now make samp represent the same vector as dlib.vector([0, 1])
samp.clear()
samp.append(dlib.pair(1, 1))
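
# As a small aside (not part of the example above, just a minimal sketch of how
# dlib.make_sparse_vector() can be used): if you build a sparse vector by
# appending pairs out of order, or with repeated indices, you can pass it to
# dlib.make_sparse_vector() to put it into the required form.  It sorts the
# pairs by index and collapses duplicate indices into a single pair.  The
# "messy" variable below is just a throwaway vector used for illustration.
messy = dlib.sparse_vector()
messy.append(dlib.pair(3, 1))   # appended out of order
messy.append(dlib.pair(0, 2))
messy.append(dlib.pair(3, 4))   # duplicate index 3
dlib.make_sparse_vector(messy)
# messy is now sorted by index and contains no duplicate indices, so it is
# safe to use with the svm-ranking tools shown above.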