From April to July, ZEEF studied the combination of search and curation we blogged about earlier, in collaboration with Matthijs, a Master's student at the UvA. We investigated a number of variables that characterize ZEEF and are absent from Google's search engine, to see what effect they would have on user satisfaction:
- Organization of links into link blocks
- Curator profile (shown above a ZEEF page)
- Human re-ranking of search results by relevance
We created custom pages with and without these variables present and sent them to the crowd at Mechanical Turk. The Turkers compared the pages to each other, as shown in the image on the right, based on a questionnaire we developed. By using MTurk, we could very quickly gather the opinions of the 360 people we needed for this study.
We found a number of interesting things. First, the link blocks ZEEF uses seem to enhance user satisfaction, possibly because they reduce the effort needed to process the information on a page. In addition, as we mentioned in our previous post on this study, curation adds context, which can add value.
However, we also got some unexpected results. Neither the re-ranking of results nor the curator profile had the effect we anticipated: both showed no significant change in user satisfaction. That doesn't necessarily mean users dislike these features, but they also don't boost satisfaction in the way we expected. In the case of re-ranking, the re-ordering of results may simply not stand out enough for users to notice it.
The curator profile also raised questions as to why it didn't have the effect we anticipated. The literature suggests a trust issue: even though curators on ZEEF are knowledgeable about the subjects they manage, they may not always be well known. And, as Morris et al. (2010) point out, people trust people they know more than strangers.
These results point out directions for further research to make ZEEF an even better tool than it already is. First, as the curator profile in its current form doesn't have the effect we anticipated, we will be testing different forms of the profile on the live platform using A/B testing, to see which version people like best. We will experiment with adding the profile of a well-known, influential person to a page, as well as with displaying the profile in a much less prominent way. By looking at how the different versions of the pages perform, we can decide how to present curator information in the future.
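To illustrate what such an A/B comparison involves, here is a minimal sketch of a two-proportion z-test. The metric (visitors who clicked a link on the page) and all counts are made-up placeholders, not real ZEEF data, and the actual experiment setup may differ.

```python
import math

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Z statistic for the difference between two conversion rates."""
    p_a = success_a / total_a
    p_b = success_b / total_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (success_a + success_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: variant A (current profile) vs. variant B (new profile)
z = two_proportion_z(100, 1000, 150, 1000)
# |z| > 1.96 would indicate a significant difference at the 5% level
print(abs(z) > 1.96)
```

In practice a ready-made implementation (e.g. from a statistics library) would be preferable, but the idea is the same: compare the conversion rates of the page variants and check whether the difference is larger than chance would explain.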
We may also experiment with filtering instead of just re-ranking. We could ask people to construct their own top 10 on a given subject, drawing from the Google top 100 or their own knowledge, and see how their rankings correlate with each other and with Google's results. The results of such a comparison study may provide further insight into what people find relevant, and why.
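One simple way to quantify how two such rankings agree is Spearman's rank correlation over the items they share. The sketch below uses placeholder item names rather than real links, and restricts the comparison to the overlap, since a curator's top 10 and Google's results need not contain the same items.

```python
def spearman_rho(ranking_a, ranking_b):
    """Spearman's rank correlation over items both rankings share."""
    common = set(ranking_a) & set(ranking_b)
    n = len(common)
    if n < 2:
        return None  # not enough overlap to correlate
    # Relative ranks (1-based) within each list, restricted to common items
    ra = {item: i + 1 for i, item in enumerate(x for x in ranking_a if x in common)}
    rb = {item: i + 1 for i, item in enumerate(x for x in ranking_b if x in common)}
    d_squared = sum((ra[item] - rb[item]) ** 2 for item in common)
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical rankings: a curator's ordering vs. Google's ordering
curator_top = ["a", "b", "c", "d", "e"]
google_top  = ["b", "a", "c", "e", "d"]
print(spearman_rho(curator_top, google_top))  # → 0.8
```

A value near 1 means the rankings largely agree, near 0 means no relation, and near -1 means they are reversed; low overlap itself is also informative, since it suggests curators surface links Google does not.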
For more information on the background and results of the study, you can read the complete report here.