
Top 14 tips for creating the best curated page

As a result of a research project we recently carried out, we have identified the main factors that make people continue using curated content in the future.

These factors are:

  1. Agreement with the ranking of the links in the page.
  2. High level of trust in the curator of the page.

We have also developed a Guidelines page with 14 tips for creating high-quality pages that visitors appreciate, thus increasing the quality of content on ZEEF pages, user recognition and traffic.

We thank everyone who participated in the research and took our survey – you helped us a lot, and we’ve got very inspiring results to share with you.

Agreement with the ranking

The criteria used to establish a ranking are critical. If two people base their rankings on different criteria (e.g. the price of products in a shop versus the distance of the shop from the customer), they will end up with two completely different lists. Therefore, for two parties to agree with each other, it is important that they use the same ranking criteria. However, our research showed that the ranking criteria do not influence the level of agreement of ZEEF visitors, because visitors come to ZEEF to find information and usually do not have enough knowledge to judge which link is the best. But this is no reason to relax! Visitors do have expectations about the ranking criteria, and the curator should always take them into account.
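
As a rough illustration of this point (the shops and numbers below are made up), sorting the same items by two different criteria produces two different orderings:

```python
# Hypothetical data: the same shops ranked by two different criteria.
shops = [
    {"name": "Shop A", "price_eur": 25, "distance_km": 4.0},
    {"name": "Shop B", "price_eur": 40, "distance_km": 0.5},
    {"name": "Shop C", "price_eur": 30, "distance_km": 2.0},
]

by_price = sorted(shops, key=lambda s: s["price_eur"])
by_distance = sorted(shops, key=lambda s: s["distance_km"])

print([s["name"] for s in by_price])     # ['Shop A', 'Shop C', 'Shop B']
print([s["name"] for s in by_distance])  # ['Shop B', 'Shop C', 'Shop A']
```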

Trust in ZEEF

Because ZEEF is a curated platform, the level of trust in ZEEF mainly depends on the level of trust in the curator of a particular page. Interestingly, aspects such as how and why a curator compiles the lists do not influence how visitors perceive the quality. In any case, the majority of them tend to see what they want to see: a high-quality, recently updated page with ranked lists of links, made by a professional in order to help people find the best and most relevant sources of information. This can be explained by the trustworthy, high-quality design of pages created on ZEEF. However, personal information about the curator, the profile picture, the structure of the page and the quality of the content in general are very important for establishing visitors' trust.

Guidelines

Overall, there were no problems with visitors' perception of ZEEF pages, but we would like to show our curators what visitors expect and how they think ZEEF pages are created. On the new Guidelines page you will find 14 tips on where to find links, how to maintain a page after it is published, what information to include about yourself, how to rank the links, and more. We recommend that all our curators, old and new, read these guidelines, regardless of how many pages you have already published.

Read the top 14 tips for creating the best page


How curators rank their lists – research study

Curators on ZEEF.com play an important role in ranking the links in their lists. But do curators perceive the ranking the same way visitors do? Help us answer this question!

Yana Ledeneva, a Master's student at the University of Amsterdam, has started a new research project at ZEEF. The aim of the project is to evaluate which factors curators take into account when determining the ranking of their links, and whether visitors of ZEEF pages perceive these rankings the same way curators do.

Idea of the research

On every ZEEF page there is an implicit dialogue between the curator and the visitor, which takes place through a clear, well-filtered, SEO- and spam-free ranking of links to websites that have the best content on a particular topic. This dialogue is complemented by a particular level of trust and agreement from the visitor. If visitors do not find the content sound, they will disagree with the curator, which reduces their level of trust. Therefore, it is extremely important for curators on ZEEF to keep the level of disagreement between themselves and their visitors as low as possible. To achieve this, it is essential to evaluate how curators construct the rankings of their lists and compare that with how ZEEF visitors perceive those rankings.

In other words, our research addresses the possibly different perceptions of rankings by curators and by visitors, in order to answer the following question:

“Do ZEEF curators take into account the same ranking factors that visitors think should be taken into account?”

Description of the research

The research consists of two parts. In the first part we will analyze the main factors and parameters that are taken into account by the curators along with their methods of making rankings. The second part is dedicated to the question of how the visitors perceive the rankings on ZEEF pages and whether they agree with them. At the end of the research we will compare the results of both parties and analyze whether both the curators and the visitors perceive rankings in a similar way. Any discrepancy in the perceptions of both sides may result in a higher level of disagreement and mistrust, so with the help of this research we will be able to reveal existing problems and do everything to fix them.

This week we are launching the surveys for our research, so we kindly ask both curators of ZEEF pages and visitors to answer our survey questions. We value your time and promise that it will not take more than 10 minutes. We guarantee that the data will be used only for research purposes and will not be shared with other parties.

Help us make the research as accurate and truthful as possible, so that we can improve ZEEF and let people provide you with the best-quality content!

Curators survey | Visitors survey


Results UvA Study: ‘Adding a human touch to search results’

In the period April–July, ZEEF studied the combination of search and curation we blogged about earlier, in collaboration with Matthijs, a Master's student from the UvA. We investigated a number of variables which characterize ZEEF and are absent from Google's search engine, to see what their effect on user satisfaction would be:

  • Organization of links into link blocks
  • Curator profile (shown above a ZEEF page)
  • Human re-ranking of search results by relevance

We created custom pages with and without these variables present, which were sent to the crowd at Mechanical Turk. The Turkers compared the pages to each other, as illustrated in the image on the right, using a questionnaire we developed. By using mTurk, we could very quickly gather the opinions of the 360 people we needed for this study.
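
To make the setup concrete, here is a minimal sketch of how the page variants could be enumerated, assuming a full factorial design over the three variables listed above; the variable names are illustrative, not ZEEF's actual configuration.

```python
# Enumerate all on/off combinations of the three variables (illustrative names).
from itertools import product

variables = ["link_blocks", "curator_profile", "human_reranking"]

conditions = [dict(zip(variables, combo))
              for combo in product([False, True], repeat=len(variables))]

for condition in conditions:
    print(condition)
# 2 ** 3 = 8 page variants; pairs of variants can then be shown side by side
# to a Mechanical Turk worker together with the questionnaire.
```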

Results

We found a number of interesting things. First, it seems the blocks ZEEF uses enhance user satisfaction. This may have to do with them reducing the effort needed to process the information on a page. In addition, as we mentioned in our previous post on this study: curation adds context, which can add value.

However, we also got some unexpected results. Neither the re-ranking of results nor the curator profile had the effect we anticipated, showing no significant change in user satisfaction. That doesn't necessarily mean users think they are a bad thing, but they also don't boost satisfaction in the way we expected. In the case of re-ranking, it may be that the re-ordering of results simply doesn't stand out, so users may not notice it.

The curator profile also raised the question of why it didn't have the effect we anticipated. The literature suggests there may be a trust issue: even though curators on ZEEF are knowledgeable on the subjects they manage, they may not always be well known. And, as Morris et al. (2010) point out, people trust people they know more than strangers.

What’s next?

These results can be of value to us by pointing out in which direction we can do further research to make ZEEF an even better tool than it already is. First, as the curator profile in its current form doesn’t have the effect we anticipated, we will be testing different forms of the profile on the live platform using A/B testing, to see which version people like the best. We will experiment with adding profile information of a well-known, influential person to a page, as well as with displaying the profile in a much less prominent way. By looking at how the different versions of the pages perform, we can decide on how to present curator information in the future.

We may also experiment with filtering instead of just re-ranking. We could have people construct their own top 10 on a certain subject from the Google top 100 or from their own knowledge, and see how their rankings correlate with each other and with the Google results. The results of such a comparison study may provide further insight into what people find relevant, and why.
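
As a sketch of how such a comparison could be quantified (assuming scipy is available; the rankings below are made up), Kendall's tau gives a correlation between a curator's ordering and Google's ordering of the same ten items:

```python
# Compare two orderings of the same 10 items with Kendall's tau (1 = identical order).
from scipy.stats import kendalltau

curator_positions = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # curator's order
google_positions  = [3, 1, 2, 6, 4, 5, 9, 7, 10, 8]   # Google's positions of the same items

tau, p_value = kendalltau(curator_positions, google_positions)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```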

For more information on the background and results of the study, you can read the complete report here.


Human (Curation) vs. Machine (Algorithm) Ranking

The World Wide Web has become a maze for consumers who want to find information. People have to rely on search engine algorithms to find the best products or services. However, just having computers filter our information may not always be the best solution. At ZEEF we believe that people can filter the right information very effectively because they are able to put the information into perspective and relate it to other content. ZEEF is the ideal tool for this content curation process.

What Is Curation?

Content curation means sorting and showing web content in a well-structured way, focusing on a central idea or subject. Step by step a content curator chooses, classifies, organizes, and publishes information. The curator has the discretion to choose and share the best, most relevant content to a certain community. Thus, content curation is not merely about link collection or data gathering; it is about arranging existing content into the right context with proper annotation and presentation.

Curators know a product or service very well, and are therefore much better suited to give recommendations or advice than an automated result from a search engine. This is especially true when looking for recommendations or opinions, or when dealing with topics such as entertainment or technology. A study by Morris, Teevan and Panovich (2010) indicated that in these cases people trust other people more. These people can be experts (curators), but also friends or acquaintances in the user's social networks. The main reason for this preference is trust: people indicated that they trusted their peers over Google, or simply didn't trust the results Google provided when the query was very personal.

In addition, the search results that show up at the top of a search engine's ranking are not necessarily the best ones: companies can optimize their websites (or pay to have this done for them) to appear at the top of the list, for example by increasing the number of links to their websites. Nowadays, Google uses more than just the number of links to determine where a search result is shown, but it can still be a large contributing factor.
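
A toy sketch of the link-counting idea (not Google's actual algorithm; the sites and counts are invented) shows why such a signal is easy to game: adding inbound links to one site moves it up the list.

```python
# Rank sites purely by inbound-link count (a deliberately naive toy model).
inbound_links = {
    "honest-review.example": 120,
    "big-shop.example": 95,
    "spammy-shop.example": 80,
}

def rank_by_links(counts):
    return sorted(counts, key=counts.get, reverse=True)

print(rank_by_links(inbound_links))          # spammy shop ranks last
inbound_links["spammy-shop.example"] += 200  # buy or plant extra inbound links
print(rank_by_links(inbound_links))          # spammy shop now ranks first
```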

Algorithms in the Curation Process


While we believe people are capable of providing the best search results, we also recognize the value of algorithms. Since the Internet has grown enormously in the past 20 years, the days when you could read 'the entire Internet' in a single day are long gone. Instead, companies like Google try to index the Internet, so users can easily find the information they want.

This works well, but because users are exposed to increasingly large amounts of information, they can't keep up anymore. This phenomenon is known as information overload, and has been defined in a number of ways. One perspective explains it as having not enough time to read everything we're expected to read. Another perspective is the one made famous by Clay Shirky: “It's not information overload, it's filter failure.”

What he means by this is that despite there being a lot of information available to read, there is no overload. He believes there is only the absence of filtering, caused by the internet enabling anyone to publish anything.

ZEEF aims to reintroduce a form of filtering by having curators make sense of the large amounts of information the algorithms present to users. So in a sense, humans and algorithms aren't just competitors; they have to work together: curators can find information through traditional search engine technology and then filter out the irrelevant information, leaving only the best.
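
As a minimal sketch of that division of labour (the URLs and the curator's picks are hypothetical), a curator could start from whatever a search engine returns and keep only the links they judge relevant:

```python
# Start from raw search-engine results, keep only the curator-approved links.
search_results = [
    "https://example.com/in-depth-guide",
    "https://example.com/thin-seo-page",
    "https://example.com/official-docs",
    "https://example.com/link-farm",
]

curator_picks = {
    "https://example.com/in-depth-guide",
    "https://example.com/official-docs",
}

curated = [url for url in search_results if url in curator_picks]
print(curated)  # only the two links the curator considers worth keeping
```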

“I think curation is inherently a human/machine process – with humans as an essential part.”

Steven Rosenbaum, Chief Curator at WayWire.com

The best, however, is not necessarily what an algorithm thinks is best. This is illustrated by Zhong et al. (2013), who found that curation highlights different information from traditional methods such as search. This underpins our belief that curation is important: it allows valuable information to emerge from hiding.

However, Zhong et al. also mention curation being important for synchronizing communities: having them focus their attention on a selection of information to allow for richer conversations. This isn’t necessarily done on purpose – as curation is often seen as a personal activity rather than social – but nevertheless, their study found that a small subset of items received a vast majority of curation attention.

Other Examples

That curators or users can have a different opinion than the algorithms has not gone unnoticed. For example: Google did an experiment in which they asked users to indicate which search results they preferred, and Bing has experts create curated content: a collection of visual material and links, to go along with their ‘traditional’ search results. Another example is ROCKZi by Blekko, which allows people to vote on content they come across, contributing to an overview of what is hot on the Web.

Our Research

A while ago we did an experiment, which we talked about before: we changed the order of the Google search results for a query to something we thought was more appropriate, and asked users which ranking they preferred. About 70% of the 70 people we asked preferred our curated ranking over Google's.

Now that we have this first confirmation, we would like to investigate scientifically whether our approach to re-ranking search results is perceived as better than the traditional Google results. In the end, it’s all about what the customer’s information need is, and that can be satisfied using different means. As we mentioned before, research indicated these means can include curators, friends and acquaintances.

Our research focuses on the presentation of content. ZEEF not only ranks information, but also categorizes it. We want to evaluate whether this curation process has an added value over just providing a list of results. To do this, we plan to compare Google results for certain queries with the corresponding ZEEF pages, and ask users how relevant they found the results, as well as how satisfied they were with them. In addition, we will compare the default Google list with our own ranked list in a similar way to what we did before.

In both cases, we are testing whether user opinions differ significantly for the different scenarios. Based on our previous experiment and what we found out by studying the field, we hope to find a scientific justification for our approach to content curation.
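
As a sketch of how that significance test could look (assuming scipy is available; the satisfaction scores below are made-up 1–5 questionnaire answers), a Mann-Whitney U test is one reasonable choice for ordinal data:

```python
# Compare satisfaction scores from two scenarios with a Mann-Whitney U test.
from scipy.stats import mannwhitneyu

google_scores = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # e.g. plain Google result list
zeef_scores   = [4, 5, 4, 5, 3, 4, 5, 4, 4, 5]   # e.g. curated ZEEF page

stat, p_value = mannwhitneyu(google_scores, zeef_scores, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```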

“In today’s world of content abundance, the skill of how to find, make sense, and share content that we need to be effective in our work is critical.”

Beth Kanter, Trainer & Nonprofit Innovator in networks, learning, and social media, recognized by Business Week and Fast Co.
