This experiment is carried out by the Performance Predictor module, which predicts a future term's marks from past term marks. For instance, if the past term marks are available (grade 10's 1st, 2nd and 3rd terms and grade 11's 1st and 2nd terms), the last term (grade 11's 3rd term) can be predicted. Note that the predicted result is the grade (A, B, C, etc.), so this is technically a classification task. Models are built per subject, so the prediction can be performed for each subject. Several classification techniques are used, so many models exist per subject, and the experiments below assess those models as well as uncover previously unknown or otherwise significant information about a subject in a school. In total there are more than a hundred experiments, covering different combinations of subject, school (individual or integrated), standardization, the term being predicted (1st, 3rd, etc.) and model.
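Concretely, the classification target can be sketched as follows. This is a minimal Python illustration only: the grade boundaries and the four-grade scheme used here are assumptions for illustration, not the actual bin definitions, which are given in the CLASS IMPLEMENTATION section.

```python
def mark_to_grade(mark, boundaries=(75, 65, 40)):
    """Map a 0-100 term mark to a grade bin.

    The cut-offs here are hypothetical; with 3 boundaries we get
    4 bins (A, B, C, F), matching a 4-bin configuration.
    """
    grades = ("A", "B", "C", "F")
    for grade, cutoff in zip(grades, boundaries):
        if mark >= cutoff:
            return grade
    return grades[-1]

# Past terms act as features; the final term's grade is the class label.
past_terms = [68, 72, 70, 74, 71]   # grade 10 T1-T3, grade 11 T1-T2
label = mark_to_grade(77)           # grade 11 T3, the value to be predicted
```

Because the label is discrete, standard classifiers (Naive Bayes, J48, etc.) apply directly, and changing the number of bins changes the difficulty of the classification problem.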

The experiment explores the subject Science and Technology, first in a particular school (School 1) and then across two schools (School 1 and School 2), by evaluating the models which predict the 6th Ordinary Level term (that is, the 3rd term of grade 11).

The outcomes are plotted in the radar graphs below, which can present a large amount of information in a single view. For clarity, since this is the very first illustration, the result tables used to produce the graphs are also presented next to the respective radar plots. This case encompasses 48 different combinations, all shown on the radar graph in figure 1.

Those 48 combinations arise from 4 different bin counts (refer to the section CLASS IMPLEMENTATION regarding 'bins'), 3 ways of handling missing values and 4 different algorithms. The replacement techniques and algorithms are depicted in the table.
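The way the 48 combinations arise can be sketched with a simple enumeration. The labels below are assumptions chosen to match the techniques named in this section; the report's own tables are the authoritative source.

```python
from itertools import product

# Hypothetical labels matching the techniques discussed in this section.
bin_counts = (2, 3, 4, 5)
missing_handlers = ("remove rows", "replace with mean", "leave to algorithm")
algorithms = ("Naive Bayes", "J48", "AdaBoost M1", "Multilayer Perceptron")

# Every (bins, handler, algorithm) triple is one experiment setting.
combinations = list(product(bin_counts, missing_handlers, algorithms))
# 4 * 3 * 4 = 48 combinations per subject per school
```

Each triple corresponds to one spoke-and-series point on the radar graph, which is why a single plot can summarise the whole grid.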

- When the number of bins is two, the accuracy is at its highest, lying around 79% to 81%.
- Although Naive Bayes shows marginal ups and downs around the radar, at 2 bins it is clearly the winner in terms of accuracy.
- Under these particular settings, and for this sample, the handling of missing values does not affect the accuracy significantly (though the significance may increase with the number of instances).
- When the number of bins is 5 or 4 the accuracy lies in a small range around 60%, while 3 and 2 bins enjoy drastic increases.
- The time taken to build the MLP (Multilayer Perceptron) model is comparatively very high: whereas the other 3 algorithms finish in milliseconds, the MLP takes minutes.
- Among the three algorithms other than MLP, AdaBoost M1 takes a marginally higher building time.

- When the number of bins is reduced from 5 to 2, the accuracy changes drastically, leaping from 55% to 81%.
- The model-building time of the MLP changes drastically with the number of bins, from a maximum of 90 seconds at 5 bins to a minimum of 10 seconds at 2 bins.

- For the data from School 2, the accuracy does not depend much on the missing value handling method.
- Although Naive Bayes shows marginal ups and downs around the radar, at 2 bins it is clearly the winner in terms of accuracy.
- Under these particular settings, and for this sample, the handling of missing values does not affect the accuracy significantly (though the significance may increase with the number of instances).
- When the number of bins is 5 or 4 the accuracy lies in a small range around 60%, while 3 and 2 bins enjoy drastic increases.
- The time taken to build the MLP (Multilayer Perceptron) model is comparatively very high: whereas the other 3 algorithms finish in milliseconds, the MLP takes minutes.
- Among the three algorithms other than MLP, AdaBoost M1 takes a marginally higher building time.

- When the number of bins is reduced from 5 to 2, the accuracy changes drastically, leaping from 55% to 81%.
- The model-building time of the MLP changes drastically with the number of bins, from a maximum of 90 seconds at 5 bins to a minimum of 10 seconds at 2 bins.

- For the data from the combined schools, when the number of bins is 5, the accuracy is higher when the missing values are replaced by the mean value.
- MLP is marginally lower in accuracy (unless the number of bins is 2) and its model-building time is comparatively far too high, so it is out of the competition.
- For practical purposes, the team is most interested in 3 and 4 bins (2 bins are prone to high ambiguity and 5 bins give impractically low accuracy), and there Naïve Bayes is the marginal winner in terms of accuracy.
- Under these particular settings, for this combined sample, the handling of missing values does not affect the accuracy significantly unless the number of bins is 5; its influence diminishes as the number of bins is reduced.
- When the number of bins is 5 or 4 the accuracy lies in a small range around 60%, while 3 and 2 bins enjoy drastic increases.
- Again, the time taken to build the MLP (Multilayer Perceptron) model is comparatively very high: whereas the other 3 algorithms finish in milliseconds, the MLP takes minutes.
- Among the three algorithms other than MLP, AdaBoost M1 takes a marginally higher building time.

- When the number of bins is reduced from 5 to 2, the accuracy changes drastically, leaping from ~55% to ~85%.
- Again, the MLP's model-building time is comparatively very high: whereas the other 3 algorithms finish in milliseconds, the MLP takes minutes, and its time drops drastically when the number of bins is reduced from 4.

- When standardized School 1 data are used, the algorithms display notable differences in accuracy; AdaBoost M1 and Naïve Bayes are marginally better than the other two.
- Compared to non-standardized data, the standardized data yield marginally lower accuracy (surprisingly) for this School 1 sample.
- The accuracy tends to be higher when the missing values are handled by the algorithm (Weka) or replaced by the mean, rather than when the rows with missing values are removed; this influence of the missing value handling diminishes as the number of bins is reduced.
- MLP is lower in accuracy (unless the number of bins is 2) and its model-building time is comparatively far too high, so it is out of the competition.
- Under these particular settings, for this standardized School 1 sample, the handling of missing values does not affect the accuracy significantly unless the number of bins is 5.

- When the number of bins is reduced from 5 to 2, the accuracy changes drastically, leaping from ~45% to ~85%.
- Again, the MLP's model-building time is comparatively very high: whereas the other 3 algorithms finish in milliseconds, the MLP takes minutes, and its time drops drastically when the number of bins is reduced from 4.

- When standardized School 2 data are used, the algorithms display notable differences in accuracy. When the number of bins is less than 4, J48 is lower in accuracy and Naïve Bayes is the clear winner among all four.
- Compared to non-standardized data, the standardized data yield notably lower accuracy (surprisingly) at 2 and 3 bins for this School 2 sample.
- The accuracy tends to be higher when the missing values are handled by the algorithm (Weka) or replaced by the mean, rather than when the rows with missing values are removed; this influence diminishes as the number of bins is reduced.
- MLP is lower in accuracy (unless the number of bins is 2) and its model-building time is comparatively far too high, so it is out of the competition.

- When the number of bins is reduced from 5 to 2, the accuracy changes drastically, leaping from ~20% to ~88%.
- Again, the MLP's model-building time is comparatively very high: whereas the other 3 algorithms finish in milliseconds, the MLP takes minutes, dropping drastically when the number of bins is reduced from 4. When the number of bins is 3 and the missing values are handled by the MLP itself, the time taken increases steeply.

- When standardized combined school data are used, the algorithms display similar accuracies across the other parameters.
- Compared to non-standardized data, the standardized combined data yield notably lower accuracy (surprisingly) at 2 and 3 bins for this combined sample.
- Missing value handling does not seem to influence the accuracy for this particular sample and these settings.
- MLP is lower in accuracy (unless the number of bins is 2) and its model-building time is comparatively far too high, so it is out of the competition.

- When the number of bins is reduced from 5 to 2, the accuracy changes drastically, leaping from ~40% to ~88%.
- Again, the MLP's model-building time is comparatively very high: whereas the other 3 algorithms finish in milliseconds, the MLP takes minutes, and its time drops drastically when the number of bins is reduced from 4.

This experiment is carried out by the Exam Comparator module, which measures the standard of a school's examinations against the general examinations conducted annually by the Sri Lankan government. It is applicable to the grade 11 3rd term and grade 13 3rd term examinations, because the government conducts two general exams, one at Ordinary Level and one at Advanced Level.

As the general examinations' results are not available to us at the moment, grade 11's 3rd term examinations are treated as dummy general examinations, and grade 11's 2nd term examinations are treated as the 3rd term examinations and compared against the dummy general examinations to check whether they agree on standards.

Jaccard similarity = 89.65517241379311

Sequence alignment similarity score = 93.59057807895084

According to the explanation given in the chapter CLASS IMPLEMENTATION under the Exam Comparator module, possible interpretations are:

- School 1's Ordinary Level Science and Technology exam is comparable in standard to the general exam for the given batch.
- Students appear to perform similarly in both examinations, and teachers are fair regarding examinations and grades.

Jaccard similarity = 79.3103448275862

Sequence alignment similarity score = 89.90365739269382

According to the explanation given in the chapter CLASS IMPLEMENTATION under the Exam Comparator module, one possible interpretation would be:

- The school exam differs slightly from the general exam.
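The two similarity measures reported above can be sketched as follows. The exact inputs and scoring parameters are defined in the CLASS IMPLEMENTATION chapter, so this Python sketch, a set-based Jaccard similarity plus a Needleman-Wunsch global alignment score normalised to a percentage, is only an illustrative reconstruction with assumed match/mismatch/gap scores of +1/-1/-1.

```python
def jaccard_similarity(a, b):
    """Jaccard similarity of two collections, as a percentage."""
    a, b = set(a), set(b)
    return 100 * len(a & b) / len(a | b)

def alignment_similarity(s, t, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two grade
    sequences, normalised to a percentage of the all-match maximum.
    The scoring parameters are assumptions for illustration."""
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # gap costs down the first column
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):          # gap costs along the first row
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch),
                dp[i - 1][j] + gap,    # gap in t
                dp[i][j - 1] + gap,    # gap in s
            )
    return 100 * dp[n][m] / (match * max(n, m))

# Invented grade sequences for two exams, one grade per student.
school_exam = "AABCB"
general_exam = "AABCC"
```

The sequence alignment score tolerates local shifts in grade order, which is why it reads slightly higher than the Jaccard figure for the same pair of exams.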

...

...

CLASS is a research and development project initiated as a final year project by team Arima at the
Department of Computer Science Engineering, University of Moratuwa.
The project mainly focuses on finding ways to adopt Learning Analytics in Sri Lankan schools to assess
learning progress, predict performance and track potential issues.