The MODE project

Digital technologies are profoundly changing society. These changes create significant challenges and opportunities for social scientists. An education researcher, for instance, now asks: How can I investigate the interactive whiteboards and visualizers used in schools to facilitate learning? A sociologist may ask: How do digital technologies change the way people organize their work? They may both ask: How can we make use of the digital artefacts that the people we study produce, such as websites, games, mobile devices, virtual environments, 'touch-based' technologies, and videos, as well as the digital data that technologies produce automatically, such as CCTV video recordings?

Multimodal methodologies offer ways to address these urgent and timely questions by attending to what people say, write, draw, design, and look at; how they lay out and navigate rooms, websites, and other spaces; and how they use their hands and other parts of their bodies to interact with computers, devices, and other people in face-to-face encounters. In short: multimodal methodologies allow social scientists to study how people behave and interact in contemporary digital environments.

MODE is a node of the National Centre for Research Methods and is funded by the Economic and Social Research Council. It develops multimodal methodologies for social scientists, providing systematic ways to investigate all modes of communication used in digital environments, whether they are sites of learning, work, or 'social' sites (e.g. Facebook). MODE focuses on five themes:

  1. Video and other digital data: How can researchers gather materials such as video recordings and logs of people's online presence, and how can they analyze these materials systematically using multimodal methodologies?
  2. Multimodal transcription: How can materials such as video, and the analysis of that video, be presented to social scientists when writing up research findings in a journal article?
  3. Multimodal theories and methods: How can different perspectives and different sources of evidence on materials be reconciled? For instance, a video recording of someone browsing the internet and a log capturing their mouse movements.
  4. Researching space, place and time: How do you make sense of the activities that people engage in when they are not fixed in time and space? For instance, responses to a video posted on YouTube.
  5. Technology and embodiment: How do we investigate people's physical or virtual co-presence in digital environments? For instance, how do we make sense of the use of 'avatars' in virtual worlds, which extend the sense of a person's body?

We address these themes in the context of substantive areas, exemplified by the two research projects described below.

The objectives of MODE are to

  1. Establish a strategic focal point for the development, delivery and dissemination of multimodal methodologies, training and capacity building;
  2. Set up and carry out research in digital environments to test and develop new multimodal methodologies;
  3. Provide a coherent programme of training and capacity-building activities in multimodal methodologies for social science researchers;
  4. Build a social science research community that enhances the UK’s profile and leading position in multimodal methodologies and digital technologies.

MODE offers a programme of training and capacity-building activities and research projects. These include seminars, lectures, introductory courses, summer schools, and online discussion and support for early- and mid-career social scientists from a range of academic disciplines, government, private sector organisations, and the public sector. The two research projects provide a testing ground for new multimodal methodologies and focus on how people use new technologies in different digital environments. One research project is located in an operating theatre, where screen technologies are used to look inside patients' body cavities; the other is located in a classroom, where Geographic Information System technologies and touch-screen interactive tables are used to facilitate learning.