E-learning is a general term traditionally related to learning processes in which electronic communication technologies are used, specifically the Internet and its services, such as e-mail, web pages, forums, and learning platforms. The literature contains many definitions of e-learning, but the most representative works categorise them from three perspectives: distance learning, technological, and pedagogical. Any pedagogical scenario that adopts electronic media to provide full courses without requiring the geographical presence of students can be considered e-learning. In this regard, a common application of e-learning is the use of virtual courses, which offer several advantages over the classical face-to-face group. Among the well-known advantages of e-learning scenarios, we highlight: flexible access to the learning resources from many places and at any moment; the possibility of performing e-learning activities remotely, for example, using hardware or software that is accessible over the Internet; the enabling of self-pacing; and the larger number of students who gain access to the learning contents. Thus, e-learning is an essential tool for a smart campus to satisfy the needs of students located in geographically remote places.
Due to the flexibility and scalability required by e-learning and, in particular, by virtual courses, the e-learning infrastructure and architecture are commonly supported by a cloud computing environment. However, virtual courses that provide experimentation activities based on remote laboratories (also known as remote labs) must use a mixed environment: the remote labs, with the minimal required infrastructure, are deployed at the educational institution, while the remaining components are hosted in the cloud.
Specifically, remote labs are online resources that can be used within a virtual course to allow remote students to control physical laboratory instruments and perform real measurements using only a web browser, without any other dedicated application. However, remote labs also present some limitations:
- When a student requires access to an instrument that is currently in use, he or she has to wait until it becomes available again.
- Measurements can only be performed with the instruments available in specific laboratories (optical, electrical, electronic, etc.).
In this context, we can find multiple remote labs (lab farms), each dedicated to a particular experiment, even when they share similar instruments or infrastructure. Considering this fact and the fluctuation of end-user demand, the worst case arises when a specific lab is required by a large number of students: on the one hand, a long waiting queue of students builds up; on the other hand, some remote labs remain underused. From our point of view, there is a clear need to improve the efficiency and resource optimisation of lab farms through instrument re-usability and adaptability. A lab farm should be an online service that uses existing assets to deploy and provide a remote lab automatically and on demand.
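The idea of a lab farm serving experiments from a shared instrument pool can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `LabFarm` class and its methods are hypothetical names. The point is that instruments are no longer bound to a single experiment; any request whose instrument needs can be satisfied is served immediately, and students queue only when no suitable instruments remain.

```python
# Hypothetical sketch of on-demand instrument allocation in a lab farm.
from collections import Counter, deque

class LabFarm:
    def __init__(self, instruments):
        # Shared pool, e.g. {"oscilloscope": 2, "signal_generator": 1}
        self.free = Counter(instruments)
        self.waiting = deque()

    def request(self, student, needs):
        """Allocate the required instruments, or enqueue the student."""
        needs = Counter(needs)
        if all(self.free[i] >= n for i, n in needs.items()):
            self.free -= needs
            return ("allocated", student)
        self.waiting.append((student, needs))
        return ("queued", student)

    def release(self, needs):
        """Return instruments to the pool and retry queued requests."""
        self.free += Counter(needs)
        retry, self.waiting = self.waiting, deque()
        served = []
        for student, n in retry:
            status, _ = self.request(student, n)
            if status == "allocated":
                served.append(student)
        return served
```

With a single oscilloscope in the pool, a first request is allocated, a second is queued, and releasing the instrument serves the waiting student automatically, which is exactly the re-usability behaviour argued for above.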
The need for Self-Organised Laboratories based on Software-Defined Networking and virtualisation:
The adaptability of lab infrastructures requires the automation of management processes currently performed by lab operators. Because of this, new technologies must be involved in the deployment of remote labs. In this sense, we propose the use of a Self-Organised Laboratory, with the goal of moving from traditional manual management towards fully autonomic and dynamic processes that require no human intervention. This can be done through the right application of two new technologies. Firstly, to reduce network management complexity, the Software-Defined Networking (SDN) paradigm can help adaptive labs to automatically manage and orchestrate the network resources by taking into account the situational awareness of the underlying network at any given time. Furthermore, by combining the SDN paradigm with Virtualised Infrastructure (VI) and, in particular, Network Function Virtualisation (NFV) techniques, it is possible to decouple the software implementation from the underlying hardware, enhancing the flexibility and optimisation of lab resource management.
By taking into account the potential of the previous technologies, we propose a novel architecture that combines the Cloud computing and SDN paradigms with NFV techniques to monitor and orchestrate the whole lifecycle of physical remote labs (Fig. 1). The proposed architecture enables the flexible and efficient management of the elements that compose the physical remote lab, with the goal of improving its availability and ensuring the quality of service (QoS). The use of the Cloud computing and SDN paradigms, together with NFV techniques, allows our solution to deploy, configure, and control the whole lifecycle of the components making up the remote lab framework, as well as its network communications.
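To illustrate what NFV-style decoupling of software from hardware can look like in practice, the sketch below maps an experiment descriptor to a generic deployment specification that any virtualisation back end (a container engine, a VM manager) could consume. The descriptor fields and the `build_vnf_spec` helper are invented for illustration; they are not the paper's actual orchestrator.

```python
# Hypothetical sketch: an experiment descriptor is translated into a
# virtualisation-agnostic spec, so the same software image can be
# deployed on whatever hardware is available.

def build_vnf_spec(experiment):
    """Translate an experiment's requirements into a container/VM spec."""
    return {
        # software image, decoupled from the underlying hardware
        "image": f"lab/{experiment['type']}:latest",
        "env": {
            # instrument configuration is injected at deployment time,
            # so users never tune the hardware parameters by hand
            "SAMPLE_RATE_HZ": str(experiment.get("sample_rate_hz", 1000)),
            "INSTRUMENT_ADDR": experiment["instrument_addr"],
        },
        "cpu_limit": experiment.get("cpus", 1),
        "mem_limit_mb": experiment.get("mem_mb", 256),
    }

spec = build_vnf_spec({
    "type": "oscilloscope",
    "instrument_addr": "10.0.0.5",   # example address, not a real lab
    "sample_rate_hz": 5000,
})
```

Because the spec carries the experiment configuration with it, deploying the same experiment on a different machine requires no manual re-configuration, which is the flexibility the architecture aims for.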
Fig.1. SDN/NFV architecture for managing remote laboratories.
The architecture has been deployed in a realistic scenario where online users (teachers and students) have access to a learning space through a Learning Management System hosted in the cloud (Fig. 2). First, teachers create and manage virtual courses based on remote laboratories. Students can then enrol in the virtual courses and use the remote laboratories as learning activities defined within them. Remote laboratories comprise both virtual and physical resources, provided by educational institutions at the edge of the network.
Fig. 2. Scenario for providing self-organised remote laboratories.
Advantages of the proposed solution:
- Optimise the usage of the hardware and software resources making up remote labs through automatic and on-demand resource management.
- Reduce experiment deployment time by using alternative equipment and virtualisation technologies.
- Reduce manual management and improve lab autonomy.
- Ensure the QoS when the lab is deployed to enhance the users’ quality of experience.
Use case highlighting added value of the proposal:
- Communications configuration. The exchange of data may consume significant network resources in experiments where the update rate is high. As a consequence, in existing solutions, the status of the network can change during the experiment, causing delays or losses in communications and reducing the QoS. Our solution addresses this situation by monitoring the network, changing the remote lab configuration accordingly, and thereby ensuring the QoS.
- Experiment configuration. Hardware equipment is shared between different experiments and even between labs. In contrast to existing approaches, with our solution users do not need to tune multiple parameters to configure the remote hardware for an experiment: the software configuration is loaded during deployment, according to the experiment requirements.
- Concurrent access. Not all existing remote labs can be used simultaneously by different users. Our solution enables multiuser experimentation in two ways: the deployment of the experiment, and the dynamic configuration of the user’s access role, as controller or viewer. Users with the viewer role can only observe the changes in the lab until they are allowed to become controllers. Both mechanisms work together to ensure the quality of the multiuser experience.
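The controller/viewer role mechanism from the last use case can be sketched in a few lines. The `ExperimentSession` class and its methods are hypothetical names chosen for illustration; the paper describes the mechanism, not this code. One user holds the controller role at a time, everyone else joins as a viewer, and the role is handed over when the controller leaves.

```python
# Minimal sketch of dynamic controller/viewer roles for concurrent
# access to a single physical experiment.

class ExperimentSession:
    def __init__(self):
        self.controller = None   # the one user allowed to act on the lab
        self.viewers = []        # users who can only observe

    def join(self, user):
        """First user becomes the controller; later users are viewers."""
        if self.controller is None:
            self.controller = user
            return "controller"
        self.viewers.append(user)
        return "viewer"

    def leave(self, user):
        """Hand the controller role to the longest-waiting viewer."""
        if user == self.controller:
            self.controller = self.viewers.pop(0) if self.viewers else None
        elif user in self.viewers:
            self.viewers.remove(user)
```

A real deployment would additionally broadcast the lab state to all viewers; the sketch only captures the role hand-over that makes simultaneous access safe on shared hardware.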
Some experiments have been performed with the goal of:
- Measuring the time required by our architecture to deploy a virtualised function providing an experiment.
- Evaluating how the number of students using a given experiment affects the time required by our architecture to deploy an experiment.
Regarding the first goal, Table 1 shows the times obtained during the deployment of virtualised functions running on top of different virtualisation techniques and hardware infrastructures.
Table 1. Deployment Time of a New Experiment.
Fig. 3 shows the impact of using different hardware and virtualisation techniques to deploy new experiments in remote labs. In this context, running Docker containers on top of powerful hardware is the best combination in terms of the number of simultaneous experiments and the time required to deploy new ones. The decision to use VMs or containers to deploy learning experiments affects the users’ quality of experience, with containers being the alternative that enables more simultaneous experiments while consuming less time to deploy them. However, it is important to consider other aspects, such as the platform and the purpose of each experiment. In this sense, if the software controlling the remote labs and their experiments is implemented for Windows, VMs are a better alternative, because containers are oriented towards Linux operating systems. In addition, containers are more exposed to attack vectors than VMs are, so using them requires taking measures to secure them, such as reducing users’ privileges or running services as a non-root user.
Fig. 3. Time to deploy a new learning experiment according to the number of existing users. (Left) Deploy a VM on the Personal Computer. (Center) Deploy a Docker on the Personal Computer. (Right) Deploy a Docker on the Raspberry Pi 3.
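A deployment-time measurement like the one behind Table 1 and Fig. 3 can be reproduced with a simple timing harness. The helper below is a hypothetical sketch, not the authors' benchmark: it times an arbitrary deployment callable (for example, a `docker run` or VM start wrapped in a Python function) over several repetitions and reports the mean.

```python
# Hypothetical timing harness for deployment experiments: time any
# deployment callable over N repetitions and return the mean duration.
import time
import statistics

def measure_deploy_time(deploy_fn, repetitions=3):
    """Return mean wall-clock seconds taken by deploy_fn over N runs."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        deploy_fn()   # e.g. a wrapper around `docker run` or a VM start
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Stand-in deployment that just sleeps; replace with a real launcher.
mean_s = measure_deploy_time(lambda: time.sleep(0.01))
```

Repeating the measurement while varying the back end (VM vs. container) and the host (server vs. single-board computer) yields exactly the kind of comparison plotted in Fig. 3.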
Publication: Self-Organised Laboratories for Smart Campus
Authors: Alberto Huertas Celdrán, Félix J. García Clemente, Jacobo Saenz, Luis De La Torre, Christophe Salzmann, and Denis Gillet
Journal: IEEE Transactions on Learning Technologies