Schloss Dagstuhl is a familiar name to many computer science researchers. Founded in the early 1990s, the center has quickly become one of the world's premier meeting venues for computer science research. It is also in the middle of nowhere in Germany, which is a pain in terms of traveling, but once you are there, that hardly matters: it has all the amenities one could wish for.
In February/March 2017, we (Kevyn Collins-Thompson, Preben Hansen, Claus-Peter Klas and myself) ran a successful three-day Dagstuhl Seminar on the topic of Search as Learning with 26 participants. Our motivation:
Search engines have come a long way in the past few decades, enabling simple keyword search queries to be answered efficiently and with high quality. This efficiency, however, also means that we largely view search systems as tools to satisfy immediate information needs, instead of rich environments in which humans heavily interact with information content, and where search engines act as intelligent dialogue systems, facilitating the communication between users and content and fostering knowledge acquisition. Users today are relying on search systems to explore, learn and reason about content. In recent years, there has been a growing recognition of the importance of studying and designing search systems to foster discovery and enhance the learning experience outside of formal educational settings. In this Dagstuhl Seminar, we bring together researchers from the fields of psychology, information retrieval, human computer interaction, library and information science and the learning sciences to discuss the challenges and opportunities for search systems in providing support for learning.
The road to this seminar was quite long, taking 1.5 years from idea to reality:
- In October 2015 (I checked my email history) I set out to write a Dagstuhl Seminar proposal and looked for like-minded colleagues.
- In November 2015 we submitted our first version.
- In January 2016 we got a ‘reject-for-now-please-revise’ notification.
- In April 2016 we submitted the revised proposal.
- In July 2016 our proposal was accepted.
- At the end of February 2017 we ran the seminar.
- In September 2017 we put the finishing touches on the Dagstuhl report.
The last bullet point is the main reason for this blog post. The full text of the report is available here. The most interesting part of the report to me personally is the list of identified ‘search as learning’ challenges. We spent a large part of the seminar on breakout sessions where smaller groups of participants came together and identified, discussed, and explored these challenges. Here are a few challenges that I found inspiring:
- Search query logs offer a very limited view into users’ minds; we have to make educated guesses about their learning intent, their prior expertise and their context based on noisy signals. In order to make strides in understanding learning we require large-scale data with more semantic meaning behind it. What would such data look like and how can we collect it at scale? This challenge also ties in with the question of whether search should be at the centre of the investigation or ‘just’ one block in the ecology of learning.
- How can we measure to what extent robust learning (learners achieving a deep conceptual understanding) or transfer learning (learners employing learnt concepts in novel situations) is taking place during the search process?
- How can we help users who want to learn something but already struggle early on in the search process, when formulating an initial query based on their information need? A common use case here is medical inquiries, with users querying for symptoms in layman’s terms (“pain in my side”).
- In formal learning, scaffolding (breaking the learning material down into pieces and imposing a structure on each piece) is an important part of lesson design. Is it possible to add automated scaffolding tools to the search process (beyond query suggestions/autocompletion)?
- What impact does evolving knowledge (Is Pluto a planet? How many moons does Saturn have?) have on search as learning? How can we support users who are searching for information with search requests that already provide a certain answer frame (e.g. “vaccinations are bad”, “climate change is not real”)?
- Can we build interfaces that externalize models/thinking in order to help users self-reflect, calibrate, revise and collaborate?
- How can we track what users do with the information after the search, both to make sense of how the information is used and to recognize the value of the search system?
- What hints can we give users about the search results and the search process that would benefit learning (e.g. diversity, serendipity, discovery, other users’ trails)?
- What scalable measures, based on search behaviours and document characteristics, are good approximators of learning gains (and when is “good” good enough)? To what extent are such measures task- and domain-specific? Across which periods of time (if we think, for instance, about sequences of queries across sessions) can we reliably measure learning gains?
- Laboratory studies often have elaborate setups to measure learning gains. At the same time, though, they tend to measure learning on a very small number of topics or via a specifically designed search tool. To what extent can we generalize the results of these small-scale studies to other domains and tasks?
- Retrieval practice (the repeated testing of knowledge) has been shown to be beneficial to learners. Can we integrate a retrieval practice component into the search process, given that search today (at least on the Web) has been designed to minimize the amount of duplicate information? An added benefit of such a component: the retrieval practice questions can act as probes to test understanding and learning progress.
- Learning through failure: users may also learn when their information needs are not satisfied and their goals are not achieved. How can we deal with that?
- How should a conversational agent (presumably trained automatically on vast quantities of text) deal with questions to which no clear consensus answer exists (“Does God exist?”)?
- Can we emulate how intelligent tutoring systems guide their users through the learning material? Can conversational agents guide users through those parts of the search space they have not considered before? And if so, does that lead to more collective agreement on contentious issues in parts of society (e.g. climate change)?
And this is only an excerpt of the list of challenges in the report!
Finally, I want to state something that may be obvious, but was not clear to me until recently (and now I regret the missed chances over the years): if you see a Dagstuhl Seminar that you are interested in and hope for an invitation (check the list of upcoming seminars), do not wait around. Be proactive. Email the organizers of the seminar and make them aware that you would like to join. It is not uncommon for invited participants to cancel a few weeks or months before the seminar, and spaces open up that the organizers are keen to fill. It does not matter if you are a beginning PhD student, a postdoc, faculty or an industry practitioner/researcher; Dagstuhl values a healthy mix of seniority levels among its participants. Simply ask.