Abstract Architecture

OpenLAP Architecture (Muslim et al. 2016)

The abstract architecture of OpenLAP shows the interaction between the three main components in OpenLAP, namely Data Collection and Management, Indicator Engine, and Analytics Framework.

The Data Collection and Management component in OpenLAP is responsible for collecting learning activity data from different sources, adhering to the privacy policies of OpenLAP, and generating the learner and context models from it. OpenLAP uses the Learning Context Data Model (LCDM), developed in the frame of the Learning Context Project. LCDM is a compromise between IMS Caliper and xAPI: it is a user-centric, modular, and easy-to-understand data model that holds additional semantic information about the context in which an event has been generated. Note, however, that OpenLAP can be adapted to work with IMS Caliper or xAPI-based data models (Muslim et al. 2016).
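
To make the idea of a user-centric event with semantic context concrete, here is a minimal sketch of what a single LCDM-style event might look like, expressed as a plain Python dict. The field names below are illustrative assumptions, not the normative LCDM schema.

```python
# A hypothetical LCDM-style event: who did what, where, and in which
# semantic context. Field names are assumptions for illustration only.
lcdm_event = {
    "user": "amir",                       # pseudonymous user identifier
    "timestamp": "2016-05-10T14:32:00Z",  # when the event occurred
    "source": "university-lms",           # application that generated the event
    "action": "view",                     # what the user did
    "category": "assessment",             # semantic category of the event
    "entities": {                         # additional semantic context
        "course": "web-technologies",
        "resource": "assignment-3",
    },
}

def is_well_formed(event):
    """Check that the (assumed) mandatory fields are present."""
    required = {"user", "timestamp", "source", "action"}
    return required.issubset(event)
```

The `entities` map is where such a model would carry the extra semantic information about the event's context that distinguishes it from a bare activity log.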

The aim of the Indicator Engine in OpenLAP is to achieve personalized and goal-oriented LA by following a Goal / Question / Indicator (GQI) approach that allows users to easily define new indicators through an interactive UI. Additionally, it provides an administration panel to manage the analytics modules, analytics methods, and visualization techniques in OpenLAP (Muslim et al. 2016).

The Analytics Framework is responsible for the management, generation and execution of indicators. It is the combination of the OpenLAP-DataSet, the Analytics Modules, the Analytics Methods, the Visualizer, and the Analytics Engine core components of OpenLAP (Muslim et al. 2016).

User Scenarios

Student Scenario

Amir is a computer science student at ABC University. He is interested in web technologies. He uses the open learning analytics platform to collect data from his learning activities related to this subject on the university LMS, the edX MOOC platform, Khan Academy, his blog, Facebook, YouTube, Slideshare, and various discussion forums.

What Amir likes most about the open learning analytics platform is that it lets him select which learning activities from which application are collected in his profile. Privacy is one of Amir's main concerns. By default, all logged activity data are available only to him; he has, however, the option to specify which data are publicly available, to whom, and for how long.

Amir is interested in monitoring his performance across the different platforms. He uses the indicator editor to generate a new indicator which aggregates his marks from the university LMS, his peer-review feedback from the edX MOOC platform, and his open badges from Khan Academy. He specifies that his marks should be visualized against his peers' as a line chart, his peer-review feedback in a textual format, and his badges as a list view. The platform then generates visualization code that Amir can embed in the assessment module of the university LMS. Further, Amir is interested in getting recommendations related to web technologies in the form of lecture slides, videos, online articles, blog posts, and discussion forums. He generates a new indicator which recommends learning resources from different sources, and embeds it in the learning materials module of the university LMS. (Chatti et al. 2016)


Teacher Scenario

Rima is a lecturer at ABC University, where she uses the university LMS to administer her courses. She uses the personalized dashboard of the open learning analytics platform, which gives her an overview of her courses through various indicators to augment and improve her teaching. The dashboard offers various predefined indicators, such as the participation rate of students in lectures, the students' involvement rate in discussion forums, the most viewed/downloaded documents, and the progress of her students on assignments.

Recently, Rima wanted to see which learning materials are discussed most in the discussion forums. She looked through the list of available indicators but found none that fulfilled this requirement, so she opened the indicator editor, which helped her generate the new indicator and define an appropriate visualization for it. The newly generated indicator is also added to the list of available indicators for future use by other users. (Chatti et al. 2016)


Developer Scenario

Hassan is a researcher at ABC University. He has developed a mobile application for collaborative annotation of lecture videos and is interested in using the open learning analytics platform to analyze the social interactions of the application's users. Based on the data model specification and guidelines provided by the open learning analytics platform, he develops a new collector to collect activity data from his mobile application and send it to the platform. Further, he uses the indicator editor to define a new indicator which should apply the Gephi social network analysis method to the collected data. Unfortunately, this method is not yet available in the platform, so he uses the platform API to register Gephi as a new analysis method. Hassan then goes back to the indicator editor and selects the newly registered analysis method to be applied in his indicator. (Chatti et al. 2016)

System Scenarios

Indicator Generation Process

Indicator Generation

The user (e.g. a learner, teacher, or researcher) starts the indicator generation process by interacting with the Indicator Engine UI and defining an LA goal, such as monitoring, prediction, assessment, or reflection. Based on this goal, the user formulates an LA question and associates indicators with it to answer the question. To define a new indicator, the user performs the following steps:

  • Explore the learning event data stored in the LCDM format.
  • Select an appropriate dataset for the indicator.
  • Apply different filters.
  • Choose the analytics method to be applied.
  • Map the dataset to the inputs of the analytics method.
  • Choose the visualization technique to be used.
  • Map the outputs of the analytics method to the inputs of the visualization technique.
  • Preview the visualization of the indicator.
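
The steps above can be sketched as a single assembly function. Everything here is an illustrative simplification in Python (OpenLAP itself is not specified this way): filters are plain predicates, and the method, visualization, and mappings are stand-in values.

```python
# Hypothetical sketch of assembling an indicator definition from the
# user's selections in the indicator editor. Names and structure are
# assumptions, not OpenLAP's actual API.

def define_indicator(dataset, filters, method, vis, query_to_method, method_to_vis):
    """Apply the chosen filters to the dataset and bundle the selections."""
    filtered = [row for row in dataset if all(f(row) for f in filters)]
    return {
        "query": filtered,                    # dataset selected and filtered by the user
        "method": method,                     # chosen analytics method
        "visualization": vis,                 # chosen visualization technique
        "query_to_method": query_to_method,   # mapping: dataset columns -> method inputs
        "method_to_vis": method_to_vis,       # mapping: method outputs -> vis inputs
    }

# Example: keep only "view" events and count them per user as a bar chart.
example = define_indicator(
    [{"user": "amir", "action": "view"}, {"user": "rima", "action": "post"}],
    filters=[lambda e: e["action"] == "view"],
    method="count_per_user",
    vis="bar_chart",
    query_to_method={"user": "items"},
    method_to_vis={"counts": "y_values"},
)
```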

After the indicator is finalized, the Analytics Engine validates it and saves it in the Analytics Modules as a triad containing a reference to the indicator query, a reference to the chosen analytics method, and a reference to the visualization technique to be used, together with the two mappings: indicator query - method and method - visualization. The user then receives an HTML- and JavaScript-based indicator request code containing the triad identifier. This request code can be embedded in any client application (e.g. a Web page, dashboard, or LMS).
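
A possible shape for the stored triad and the generated request code is sketched below. The store, the identifier scheme, and the embed-snippet format are all assumptions for illustration; only the triad's contents follow the description above.

```python
# Hypothetical triad store and request-code generator. The URL and
# attribute names in the snippet are invented placeholders.
import uuid

ANALYTICS_MODULES = {}  # triad store, keyed by triad identifier

def save_triad(query_ref, method_ref, vis_ref, query_to_method, method_to_vis):
    """Store an indicator triad with its two mappings; return its identifier."""
    triad_id = str(uuid.uuid4())
    ANALYTICS_MODULES[triad_id] = {
        "query": query_ref,                  # reference to the indicator query
        "method": method_ref,                # reference to the analytics method
        "visualization": vis_ref,            # reference to the visualization technique
        "query_to_method": query_to_method,  # mapping: indicator query -> method
        "method_to_vis": method_to_vis,      # mapping: method -> visualization
    }
    return triad_id

def request_code(triad_id):
    """Produce an embeddable HTML/JavaScript snippet carrying the triad id."""
    return (f'<script src="https://openlap.example/indicator.js" '
            f'data-triad="{triad_id}"></script>')
```

Because the request code carries only the triad identifier, the client application stays decoupled from how the indicator is queried, analyzed, and rendered.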

Throughout the indicator generation process, the Indicator Engine constantly communicates with the Analytics Engine to fetch the dataset, list the analytics methods that can be applied, list the applicable visualization techniques, validate the specified mappings, and generate the indicator request code (Muslim et al. 2016).

Indicator Execution Process

Indicator Execution

To visualize the indicator, the indicator request code embedded in the client application initiates the communication with OpenLAP. The Analytics Engine intercepts the request and performs the following steps:

  • Check whether the request is valid.
  • Communicate with the respective Analytics Module to get the triad.
  • Get the query related to the requested indicator from the database.
  • Execute the query and get the raw data.
  • Transform the raw data into the OpenLAP-DataSet.
  • Send the OpenLAP-DataSet and the mapping indicator query - method to the analytics method referenced in the triad for analysis.
  • Receive the analyzed data as an OpenLAP-DataSet.
  • Send the OpenLAP-DataSet and the mapping method - visualization to the visualization technique referenced in the triad.
  • Receive the indicator visualization code and forward it to the requesting client application to visualize the indicator.
(Muslim et al. 2016)
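
The execution steps above can be sketched end to end as follows. The triad store, the method and visualization registries, and the list standing in for the OpenLAP-DataSet are all simplified hypothetical placeholders, not OpenLAP's actual components.

```python
# Hypothetical stand-ins for the stored triads, analytics methods,
# and visualization techniques referenced during execution.
TRIADS = {
    "t1": {
        "query": lambda: [("amir", 3), ("rima", 5)],  # stored indicator query
        "method": "sort_desc",                        # referenced analytics method
        "visualization": "bar_chart",                 # referenced visualization
        "method_to_vis": {"items": "bars"},           # mapping method -> visualization
    },
}
METHODS = {"sort_desc": lambda rows: sorted(rows, key=lambda r: r[1], reverse=True)}
VISUALIZATIONS = {"bar_chart": lambda rows: f"<div>bar chart of {len(rows)} bars</div>"}

def execute_indicator(triad_id):
    """Run one indicator request end to end and return visualization code."""
    if triad_id not in TRIADS:                     # validate the request
        raise KeyError("unknown triad")
    triad = TRIADS[triad_id]                       # get the triad
    raw = triad["query"]()                         # execute the query, get raw data
    dataset = list(raw)                            # transform into a DataSet stand-in
    analyzed = METHODS[triad["method"]](dataset)   # apply the analytics method
    vis = VISUALIZATIONS[triad["visualization"]]   # look up the visualization technique
    return vis(analyzed)                           # return code to the client
```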

System Workflow