
Some preliminary ideas regarding specifications and design inside the WE-COLLAB project

The WE-COLLAB project addresses challenges that often characterise poorly designed online courses, such as ‘Zoom fatigue’, higher levels of distractibility and information overload.
The task this article refers to is the development of a Student Feedback App, which is part of the first result (PR1) of the Erasmus+ WE-COLLAB project.

Below, on behalf of PR1 and of LINK in particular, I present some preliminary ideas regarding specifications and design. I start from the consideration that, even though there is a lot of buzz about student engagement, what we are looking for does not already exist; for example, here I have posted a quick critical review of Wooclap, a product that partly addresses similar goals.

Objectives

As to our objectives, these are excerpts from the WE-COLLAB project proposal:

It is planned to develop an interactive mobile app for providing voluntary, real-time feedback to the teacher during online lectures, with the mediation of an extension of the CommonSpaces platform, invoked through an HTTP API, which will forward xAPI statements to an LRS [...] Its use will be especially useful where it won’t be possible to perform monitoring based on neurophysiological sensors [addressed in PR4 ... however] it will be even more interesting and innovative if used in parallel to experimentation with sensors [...] since this will allow a fine-grained, offline comparison of emotional and cognitive states as drawn from automated analysis of sensor data and from subjective statements.

Methodology

Ideally, the “vocabulary” of said statements should be consistent with the classification of the emotional and cognitive states to be identified by the studies in PR4 and PR5, precisely for the purpose of enabling some comparison and integration of the data collected from different sources. However, it seems that those studies will take much longer to complete than I imagined; so we have to anticipate something, based on common sense and on a review of other applications, including videoconferencing tools. For example:

  • besides several modes of proposing questions and polls to the student (more generally, to the event participant), Wooclap has only one button, labeled I'm confused, for sending feedback on one's cognitive/emotional state;
  • the Zoom platform allows for non-verbal feedback; 'persistent' reactions, to be cancelled when no longer appropriate, can be: Yes, No, Slow down, and Speed up; Zoom also allows meeting participants to use 6 standard meeting reaction emojis, which disappear after 10 seconds and represent: Clapping Hands, Thumbs Up, Heart, Tears of Joy, Open Mouth and Party Popper, besides a large number of non-standard emojis;
  • in the xAPI verbs Registry initially created on behalf of the Advanced Distributed Learning Initiative (ADL), we found only three verbs which, in a quite generic way, could refer to the act of providing feedback on one's cognitive/emotional state: commented, reported and liked; as to the activity types section of the xAPI Registry, we found similarly generic entries: alert, question, doubt and suggestion.

Our current idea is to use only one verb, possibly send, and only one object type, possibly alert, and to use an extension slot of the xAPI statement to add a parameter classifying the type of user reaction. Defining the set of acceptable values of this parameter, including its size, will require a discussion involving the project partners.
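
To make the idea concrete, here is a minimal sketch, as a Python dictionary, of what such a statement could look like; the verb, activity type and extension IRIs are placeholders we would still have to define or register, not established identifiers:

    # Sketch of an xAPI statement for a single keypad reaction.
    # All IRIs below are placeholders, not registered identifiers.
    statement = {
        "actor": {
            "objectType": "Agent",
            "account": {"homePage": "https://www.commonspaces.eu", "name": "student-42"},
        },
        "verb": {
            "id": "https://example.org/xapi/verbs/send",  # hypothetical verb IRI
            "display": {"en-US": "send"},
        },
        "object": {
            "objectType": "Activity",
            "id": "https://example.org/events/lecture-01",  # the ongoing lecture/event
            "definition": {"type": "https://example.org/xapi/activity-types/alert"},
        },
        "context": {
            "extensions": {
                # the single extension slot classifying the type of user reaction
                "https://example.org/xapi/extensions/reaction-type": "slower",
            }
        },
    }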

Some methodological challenges

This is the zero version of the feedback message repertoire to be sent via a virtual keypad:

    go on!            slower            louder
    pause             repeat            explain
    (need) context    (need) example    (need) recap

The 9 reaction types above follow the style of the 4 persistent reactions allowed by the Zoom platform (Yes, No, Slow down, and Speed up). They are mostly performative expressions: it might not be trivial to relate them to the data on the (cognitive and emotional) inner states of the student, which are expected to be extracted from the neurophysiological signals in other strands of the WE-COLLAB project. But, possibly, this challenge could add interest to the research.
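
On the implementation side, the vocabulary could be kept in a simple mapping from machine-readable codes to button labels; the codes below are only one possible naming scheme, to be revised together with the vocabulary itself:

    # Zero version of the reaction vocabulary: code -> button label.
    # The codes are illustrative; the set itself is open to discussion.
    REACTION_TYPES = {
        "go_on":        "go on!",
        "slower":       "slower",
        "louder":       "louder",
        "pause":        "pause",
        "repeat":       "repeat",
        "explain":      "explain",
        "need_context": "(need) context",
        "need_example": "(need) example",
        "need_recap":   "(need) recap",
    }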

In any case, in this article we address mostly the technical aspects of the design. Other partners, with more competence and resources in the fields of psychology and pedagogy, may suggest a number of corrections and additions; for example:

  • would it be possible, and useful, to define a more 'introspective' type of vocabulary?
  • is the vocabulary too large (and confusing) for the student (more generally, the participant in the event)?
  • is the vocabulary too large and/or heterogeneous from the teacher's point of view, if a raw stream of reactions were to be presented to him/her?
  • in that case, what could be an alternative vocabulary?
  • what data aggregation could be carried out to complement such a data stream with a continuously updated synthetic view, say a dashboard, possibly including suggestions for the lecturer?

Architecture 

The main components of the system architecture that we envisage are:

  1. a mobile app for an Android smartphone, collecting input from students attending a lecture or, more generally, from people attending an event;
  2. a Learning Record Store (LRS), receiving xAPI statements via CommonSpaces, which plays the role of integration platform;
  3. an integration platform which: a) receives messages from the mobile app, b) forwards them to the LRS in the proper format, c) aggregates the received data and visualizes them in real time, besides adding them to the native activity stream of CommonSpaces.


The Android app

The mobile app could accept user input of two types:

  • choosing from the above-mentioned 'vocabulary', and sending, one of N predefined standard messages, each representing a different subjective state, by pressing the associated button on the virtual keypad;
  • entering and sending a short free text message, possibly in response to a prompt from the lecturer.

In both cases, the mobile app will communicate with the web platform through an HTTP API; the sketch below illustrates the payloads it could send. We plan to implement the app with the DroidScript IDE, writing mostly JavaScript code.
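
Although the app itself will be written in JavaScript, the API contract can be illustrated in a language-neutral way; this sketch uses Python and the requests library, and the endpoint paths and field names are assumptions still to be settled in the API design:

    import requests

    BASE_URL = "https://www.commonspaces.eu"  # integration platform; endpoint paths are hypothetical

    # Type 1: one of the N predefined reactions, chosen on the virtual keypad.
    requests.post(
        BASE_URL + "/api/feedback/reaction/",
        json={"event": "lecture-01", "user": "student-42", "reaction": "slower"},
        timeout=5,
    )

    # Type 2: a short free-text message, possibly answering a prompt from the lecturer.
    requests.post(
        BASE_URL + "/api/feedback/message/",
        json={"event": "lecture-01", "user": "student-42", "text": "please share the slides"},
        timeout=5,
    )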

The LRS

The LRS receives the messages from the mobile app via the web application, in the form of xAPI statements, and saves them for offline processing by Learning Analytics (LA) functions, and/or for comparison and integration with data collected from other sources, such as neurophysiological sensors and the LMS.
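
Since Learning Locker implements the standard xAPI statements API, the saved statements could later be fetched for offline analysis through the usual endpoint; in this sketch the host, credentials and filter values are placeholders:

    import requests

    # Standard xAPI statements endpoint exposed by the LRS (host is a placeholder).
    resp = requests.get(
        "https://lrs.example.org/data/xAPI/statements",
        params={"verb": "https://example.org/xapi/verbs/send", "limit": 100},
        headers={"X-Experience-API-Version": "1.0.3"},
        auth=("client_key", "client_secret"),  # LRS client credentials
        timeout=10,
    )
    statements = resp.json()["statements"]  # a StatementResult: 'statements' plus a 'more' link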

The integration platform

This is a web platform which:

  1. receives messages from the mobile app on HTTP REST API endpoints;
  2. sends them to the LRS after mapping them into xAPI statements;
  3. filters them and saves them in the CommonSpaces native activity stream;
  4. filters, aggregates and visualizes them, in the form of a continuously growing list and a dashboard, to provide real-time feedback to the teacher or lecturer.
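
As a rough illustration of steps 1 and 2, a Django REST Framework view could look like the following; the endpoint, the hard-coded credentials and the to_xapi mapping are simplifications introduced only for the sketch:

    import requests
    from rest_framework.views import APIView
    from rest_framework.response import Response

    LRS_ENDPOINT = "https://lrs.example.org/data/xAPI/statements"  # placeholder

    class ReactionView(APIView):
        """Receives a keypad reaction from the mobile app (step 1)."""

        def post(self, request):
            statement = self.to_xapi(request.data)  # step 2: map to an xAPI statement
            requests.post(
                LRS_ENDPOINT,
                json=statement,
                headers={"X-Experience-API-Version": "1.0.3"},
                auth=("client_key", "client_secret"),
                timeout=5,
            )
            # steps 3 and 4 (activity stream, aggregation, display) would hook in here
            return Response({"status": "ok"})

        def to_xapi(self, data):
            return {
                "actor": {"account": {"homePage": "https://www.commonspaces.eu",
                                      "name": data["user"]}},
                "verb": {"id": "https://example.org/xapi/verbs/send",
                         "display": {"en-US": "send"}},
                "object": {"id": "https://example.org/events/" + data["event"],
                           "definition": {"type": "https://example.org/xapi/activity-types/alert"}},
                "context": {"extensions": {
                    "https://example.org/xapi/extensions/reaction-type": data["reaction"]}},
            }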

Technical details and challenges

The integration platform will be an upgrade of the Commons Platform, on which CommonSpaces and the WE-COLLAB mini-site were built; in turn, the Commons Platform relies on the Django framework, of which the Django REST Framework (DRF) is a well-known extension.

The LRS will be a deployment of Learning Locker.

The major challenges we currently see are:

  • the development of the Android app: the DroidScript IDE is very ingenious, but this is the first time we have used it;
  • the upgrade of the Commons Platform to enable it to create WebSockets; these are required to establish asynchronous communication channels with the browser, ensuring the real-time refresh of the teacher/lecturer display.
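
One plausible way to add WebSocket support to a Django application is Django Channels; adopting it is still an assumption, but the sketch below, with a hypothetical group-naming scheme, shows how incoming reactions could be pushed to the lecturer's browser in real time:

    # Sketch based on Django Channels (an assumption: another stack may be chosen).
    from channels.generic.websocket import AsyncJsonWebsocketConsumer

    class LecturerDashboardConsumer(AsyncJsonWebsocketConsumer):
        """Pushes student reactions to the lecturer's dashboard in real time."""

        async def connect(self):
            # One group per lecture/event; the group name scheme is hypothetical.
            self.group_name = "event_" + self.scope["url_route"]["kwargs"]["event_id"]
            await self.channel_layer.group_add(self.group_name, self.channel_name)
            await self.accept()

        async def disconnect(self, code):
            await self.channel_layer.group_discard(self.group_name, self.channel_name)

        async def reaction(self, event):
            # Invoked when the HTTP view calls channel_layer.group_send(
            # group, {"type": "reaction", "reaction": ..., "user": ...}).
            await self.send_json(event)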
