Results of the UvA Study: ‘Adding a human touch to search results’

In the period from April to July, ZEEF studied the combination of search and curation we blogged about earlier, in collaboration with Matthijs, a Master’s student from the UvA. We investigated a number of variables that characterize ZEEF and are absent from Google’s search engine, to see what effect they would have on user satisfaction:

  • Organization of links into link blocks
  • Curator profile (shown above a ZEEF page)
  • Human re-ranking of search results by relevance

We created custom pages with and without these variables present and sent them to the crowd at Mechanical Turk. The Turkers compared the pages to each other based on a questionnaire we developed. By using Mechanical Turk, we could very quickly gather the opinions of the 360 people we needed for this study.
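For the curious, the design can be sketched in a few lines of Python. This is our own illustrative reconstruction, assuming a full factorial design over the three variables (the post doesn’t spell out exactly which combinations were used):

```python
from itertools import product

# The three binary page variables from the study (names are illustrative).
VARIABLES = ["link_blocks", "curator_profile", "human_reranking"]

# Assuming a full factorial design, each variable is either present or
# absent, giving 2^3 = 8 page variants for the Turkers to compare.
for variant in product([False, True], repeat=len(VARIABLES)):
    print(dict(zip(VARIABLES, variant)))
```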


We found a number of interesting things. First, the link blocks ZEEF uses seem to enhance user satisfaction. This may be because they reduce the effort needed to process the information on a page. In addition, as we mentioned in our previous post on this study, curation adds context, which can add value.

However, we also got some unexpected results. Neither the re-ranking of results nor the curator profile had the effect we anticipated: both showed no significant change in user satisfaction. That doesn’t necessarily mean users think they are a bad thing, but they don’t boost satisfaction in the way we expected. In the case of re-ranking, it may be that the re-ordering of results simply doesn’t stand out enough for users to notice it.
The curator profile also raised the question of why it didn’t have the effect we anticipated. The literature suggests a trust issue: even though curators on ZEEF are knowledgeable about the subjects they manage, they may not always be well known. And, as Morris et al. (2010) point out, people trust people they know more than strangers.

What’s next?

These results are valuable because they point out directions for further research to make ZEEF an even better tool than it already is. First, as the curator profile in its current form doesn’t have the effect we anticipated, we will test different forms of the profile on the live platform using A/B testing, to see which version people like best. We will experiment with adding the profile of a well-known, influential person to a page, as well as with displaying the profile much less prominently. By looking at how the different versions of the pages perform, we can decide how to present curator information in the future.

We may also experiment with filtering instead of just re-ranking. For example, we could have people construct their own top 10 on a certain subject, drawing from the Google top 100 or from their own knowledge, and see how their rankings correlate with each other and with the Google results. Such a comparison study may provide further insight into what people find relevant, and why.
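As an illustration of what such a comparison could look like, here is a minimal sketch using Kendall’s tau to measure how strongly two rankings of the same ten items agree. The data is made up; this is not code from the study:

```python
from scipy.stats import kendalltau

# Hypothetical ranks (1 = best) that a participant and Google assigned
# to the same ten items.
participant_ranks = [1, 2, 3, 5, 4, 7, 6, 9, 8, 10]
google_ranks      = [1, 3, 2, 4, 5, 6, 8, 7, 10, 9]

# tau near 1 means the rankings largely agree; near 0 means no relation.
tau, p_value = kendalltau(participant_ranks, google_ranks)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```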

For more information on the background and results of the study, you can read the complete report here.


Human (Curation) vs. Machine (Algorithm) Ranking

The World Wide Web has become a maze for consumers who want to find information. People have to rely on search engine algorithms to find the best products or services. However, just having computers filter our information may not always be the best solution. At ZEEF we believe that people can filter the right information very effectively because they are able to put the information into perspective and relate it to other content. ZEEF is the ideal tool for this content curation process.

What Is Curation?

Content curation means sorting and showing web content in a well-structured way, focusing on a central idea or subject. Step by step, a content curator chooses, classifies, organizes, and publishes information. The curator has the discretion to choose and share the best, most relevant content with a certain community. Thus, content curation is not merely about link collection or data gathering; it is about arranging existing content into the right context with proper annotation and presentation.

Curators know a product or service very well, and are therefore much better suited to give recommendations or advice than an automated result from a search engine. This is especially true when looking for recommendations or opinions, or when dealing with topics such as entertainment or technology. A study by Morris, Teevan and Panovich (2010) indicated that people trust other people more in these cases. These people can be experts (curators), but also friends or acquaintances in the user’s social networks. The main reason for this preference is trust: people indicated they trusted their peers over Google, or simply didn’t trust the results Google provided because the query was very personal.

In addition, the search results that show up at the top of a search engine’s ranking are not necessarily the best ones: companies can optimize their websites (or pay to have this done for them) to appear at the top of the list, for example by increasing the number of links to their websites. Nowadays, Google uses far more signals than just the number of links to determine where a search result is shown, but links can still be a large contributing factor.
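To make the link-based ranking idea concrete, here is a toy power-iteration sketch in the spirit of PageRank, the algorithm early Google was built on. The graph and damping factor are illustrative, and real ranking pipelines combine many more signals:

```python
# Toy PageRank: pages inherit score from the pages that link to them,
# which is why acquiring inbound links can push a site up the ranking.
links = {        # page -> pages it links to (made-up graph)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores stabilize
    rank = {
        p: (1 - damping) / len(pages)
           + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

# "c", with three inbound links, ends up on top.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```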

Algorithms in the Curation Process


While we believe people are capable of providing the best search results, we also recognize the value of algorithms. The Internet has grown enormously in the past 20 years, and the days when you could read ‘the entire Internet’ in a single day are long gone. Instead, companies like Google index the internet, so users are able to find the information they want easily.

This works well, but because users are exposed to ever larger amounts of information, they can’t keep up anymore. This phenomenon is known as information overload, and it has been defined in a number of ways. One perspective explains it as not having enough time to read everything we’re expected to read. Another is the one made famous by Clay Shirky: “It’s not information overload, it’s filter failure.”

What he means is that, despite there being a lot of information available to read, there is no overload; there is only an absence of filtering, caused by the internet enabling anyone to publish anything.

ZEEF aims to reintroduce a form of filtering by having curators make sense of the large amounts of information that algorithms present to users. So in a sense, curators and algorithms aren’t just competitors; they have to work together: curators can find information through traditional search engine technology, and then filter out the irrelevant results, leaving only the best.
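In code terms, that collaboration might look like the sketch below: the algorithm supplies breadth, the curator supplies judgment. Everything here is hypothetical, not ZEEF’s actual pipeline:

```python
# Hypothetical search results for a query, as an algorithm might return them.
candidates = [
    {"url": "https://example.com/guide", "relevant": True,  "curator_rank": 1},
    {"url": "https://example.com/spam",  "relevant": False, "curator_rank": None},
    {"url": "https://example.com/blog",  "relevant": True,  "curator_rank": 2},
]

# The curator filters out irrelevant results and imposes a human ordering.
curated = sorted(
    (r for r in candidates if r["relevant"]),
    key=lambda r: r["curator_rank"],
)
print([r["url"] for r in curated])
```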

“I think curation is inherently a human/machine process – with humans as an essential part.”

Steven Rosenbaum, Chief Curator

The best, however, is not necessarily what an algorithm thinks is best. This is illustrated by Zhong et al. (2013), who found that curation highlights different information than traditional methods such as search do. This underpins our belief that curation is important: it surfaces valuable information that would otherwise remain hidden.

However, Zhong et al. also mention that curation is important for synchronizing communities: having them focus their attention on a selection of information allows for richer conversations. This isn’t necessarily done on purpose, as curation is often seen as a personal rather than a social activity, but their study nevertheless found that a small subset of items received the vast majority of curation attention.

Other Examples

It has not gone unnoticed that curators and users can disagree with the algorithms. For example, Google ran an experiment in which it asked users to indicate which search results they preferred, and Bing has experts create curated content (a collection of visual material and links) to go along with its ‘traditional’ search results. Another example is ROCKZi by Blekko, which lets people vote on content they come across, contributing to an overview of what is hot on the Web.

Our Research

A while ago we did an experiment, which we talked about before: we changed the order of Google search results for a query to something we thought was more appropriate, and asked users which ordering they preferred. About 70% of the 70 people we asked preferred our curated ranking over Google’s.
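As a quick sanity check on that number (our own back-of-the-envelope calculation, not part of the original experiment): if users truly had no preference, a 50/50 split would be expected, and roughly 49 out of 70 choosing the curated ranking would be very unlikely under that assumption. A binomial test makes this precise:

```python
from scipy.stats import binomtest

# ~70% of 70 participants, i.e. about 49, preferred the curated ranking.
# Test against the no-preference null hypothesis of p = 0.5.
result = binomtest(49, n=70, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.5f}")  # far below 0.05
```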

Now that we have this first confirmation, we would like to investigate scientifically whether our approach to re-ranking search results is perceived as better than the traditional Google results. In the end, it all comes down to the customer’s information need, which can be satisfied by different means. As we mentioned before, research indicates these means can include curators, friends and acquaintances.

Our research focuses on the presentation of content. ZEEF not only ranks information, but also categorizes it. We want to evaluate whether this curation process adds value over just providing a list of results. To do this, we plan to compare Google results for certain queries with the corresponding ZEEF pages, asking users how relevant they found the results and how satisfied they were with them. In addition, we will compare the default Google list with our own ranked list, similar to what we did before.

In both cases, we are testing whether user opinions differ significantly between the scenarios. Based on our previous experiment and our study of the field, we hope to find scientific justification for our approach to content curation.
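For the significance testing itself, a standard approach for Likert-style satisfaction ratings would be a non-parametric test such as Mann-Whitney U. The sketch below uses made-up ratings; the study’s actual analysis is described in the full report:

```python
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 satisfaction ratings for two page variants.
with_curation    = [4, 5, 3, 4, 4, 5, 3, 4]
without_curation = [3, 3, 4, 2, 3, 4, 3, 2]

# Mann-Whitney U suits ordinal ratings better than a t-test.
stat, p_value = mannwhitneyu(with_curation, without_curation,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```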

“In today’s world of content abundance, the skill of how to find, make sense, and share content that we need to be effective in our work is critical.”

Beth Kanter, Trainer & Nonprofit Innovator in networks, learning, and social media, recognized by Business Week and Fast Co.