
New MOOC: Data, Analytics, & Learning

I’ve run open courses on a fairly broad range of platforms: D2L, Moodle, Instructure, a mess of social media tools, and (most frequently) Stephen Downes’ gRSShopper.

This fall, together with colleagues, I’ll be offering an open course on edX: Data, Analytics, and Learning. From the description:

In education, the use of data and analytics to improve learning is referred to as learning analytics. Analytics have not yet made the impact on education that they have made in other fields. That’s starting to change. Software companies, researchers, educators, and university leaders recognize the value of data in improving not only teaching and learning, but the entire education sector. In particular, learning analytics enables universities, schools, and corporate training departments to improve the quality of learning and overall competitiveness. Research communities such as the International Educational Data Mining Society (IEDMS) and the Society for Learning Analytics Research (SoLAR) are developing promising models for improving learner success through predictive analytics, machine learning, recommender systems (content and social), network analysis, tracking the development of concepts through social systems, discourse analysis, and intervention and support strategies. The era of data and analytics in learning is just beginning.

I’ll provide more information soon about the design of the course – we are focusing on dual structured and self-organized approaches to the course.

MOOCs: Scaling Corporate Learning

While MOOCs have gained the interest and attention of higher education, they have failed to make much of an impact on corporate learning. That is starting to change. Over the past year, organizations such as Google, GE, Cisco, McAfee, Bank of America, AXA, and AT&T have started to experiment with MOOCs. Non-profits and NGOs such as Linux Foundation, WEF, OECD, Red Cross, and others have also started experimenting with large scale online learning.

In trying to take the pulse of MOOCs in corporate learning, I’ve found it difficult to get a sense of what is happening broadly with MOOCs outside of higher education. I catch press releases and partnership announcements, but the conversation is too fragmented to provide a sense of the lessons being learned and the various implementation models. The sharing of practices and experiences in higher education is more prominent.

To address this lack of dialogue, we are organizing an online conference on MOOCs: Scaling Corporate Learning. The event is free and open and will be held June 18 & 19. Registration is now open. The schedule and speaker list will be posted later this week.

MOOCs: Expectations and Reality

In spite of (because of?) significant media attention, the dialogue around MOOCs has been more theoretical than informed. Research is lagging well behind rhetoric. Fortunately, that is starting to change. Fiona Hollands and Devayani Tirthali from Teachers College, Columbia University, have released the most informed analysis of MOOCs that I have read to date: MOOCs: Expectations and Reality (.pdf). My only quibble is with Andrew Ng’s (Coursera) attempt to rename xMOOCs as Modern MOOCs. It’s a language game (“Freedom Fries”) that obscures what anyone acquainted with the learning sciences knows: the xMOOC format is a pedagogical regression. There is very little modern about it.

From the report:

To date, there has been little evidence collected that would allow an assessment of whether MOOCs do indeed provide a cost-effective mechanism for producing desirable educational outcomes at scale. It is not even clear that these are the goals of those institutions offering MOOCs. This report investigates the actual goals of institutions creating MOOCs or integrating them into their programs, and reviews the current evidence regarding whether and how these goals are being achieved, and at what cost.

Personal Learner Knowledge Graph

The entire education system is focused on content/curriculum. Content drives almost all academic conversations. Content is the work of designers (how should we structure this), academics (what and how should I teach), administrators (how can we prove [to some random agency] that we taught students stuff that matters), and employers (this is the content I want potential employees to master).

The content view of learning is deeply embedded in our thinking at all stages of the education system. It’s so ingrained that it is hard to start a learning conversation without content as the focal point.

This content fetish is the heart of what is wrong with education. The big shift that needs to be made in education is to shift from knowing content to knowing learners. This isn’t a pablum-like argument for learner-centric education (this concept, again, starts with content, but gives lip service to learners).

What is needed in education is something like a Personal Learner Knowledge Graph (PLKG): a clear profile of what a learner knows. It doesn’t matter where the learner learned things – work, volunteering, hobbies, personal interest, formal schooling, etc. What matters is that learners are aware of what they know and how this is related to the course content/curriculum. In a sense, a PLKG is like the semantic web or the Google Knowledge Graph: a connected model of learner knowledge that can be navigated and assessed and ultimately “verified” by some organization in order to grant a degree or designation (or something like it).
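To make the idea concrete, here is a minimal sketch of what a PLKG might look like as a data structure. Everything here is illustrative – the class name, the concept names, the sources, and the tiny curriculum are all invented for the example, not part of any real system:

```python
# Hypothetical sketch of a Personal Learner Knowledge Graph (PLKG).
# All concept names, sources, and the curriculum below are illustrative.

class PLKG:
    def __init__(self):
        self.concepts = {}   # concept -> set of sources where it was learned
        self.edges = set()   # unordered pairs of related concepts

    def add_concept(self, concept, source):
        """Record a concept, regardless of where it was learned."""
        self.concepts.setdefault(concept, set()).add(source)

    def relate(self, a, b):
        """Connect two concepts the learner holds."""
        self.edges.add(frozenset((a, b)))

    def gaps(self, curriculum):
        """Curriculum concepts the learner has not yet demonstrated."""
        return set(curriculum) - set(self.concepts)

graph = PLKG()
graph.add_concept("statistics", source="formal schooling")
graph.add_concept("project management", source="volunteering")
graph.relate("statistics", "project management")

curriculum = {"statistics", "network analysis"}
print(graph.gaps(curriculum))  # {'network analysis'}
```

The point of the sketch is the `gaps` query: once knowledge is modelled this way, the system can compare a learner against a curriculum instead of delivering the same content to everyone.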

If the education system can make the transition to learner knowledge graphs, instead of mainly content, the system can start to be far more intelligent than it currently is. For example, if I’m a student who spends summer months idly consuming beverages, I will develop a different skill set than someone who spent their summer volunteering and working (see video below for a discussion I had with Steve Paikin on the Agenda). Yet when the two of us start university in fall, the system normalizes our knowledge to the curriculum. We get the same content even though we are different people with completely different skills and knowledge.

If a learning system is based on a learner knowledge graph, career planning alone would be greatly enhanced – learners should know where they are in relation to a variety of other fields based on the totality of their learning (i.e. “this is your progress toward a range of careers”). I’ve tried, somewhat crudely, to communicate this in the image below.
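The “progress toward a range of careers” idea can be sketched as a simple overlap calculation. The career profiles and learner skills below are invented for illustration; a real system would obviously need far richer models than flat skill sets:

```python
# Illustrative only: the careers, required skills, and learner skills
# below are invented for the example.

def career_progress(learner_skills, career_profiles):
    """Fraction of each career's required skills the learner already holds."""
    return {
        career: len(learner_skills & required) / len(required)
        for career, required in career_profiles.items()
    }

learner_skills = {"statistics", "writing", "teamwork"}
career_profiles = {
    "data analyst": {"statistics", "sql", "visualization"},
    "journalist": {"writing", "interviewing", "teamwork"},
}

# data analyst: 1 of 3 required skills; journalist: 2 of 3
print(career_progress(learner_skills, career_profiles))
```

The same totality of learning – formal, volunteer, hobby – feeds every career estimate, which is exactly what normalizing learners to a single curriculum throws away.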

Video from The Agenda:

Multiple pathways: Blending xMOOCs & cMOOCs

I’m running a MOOC on edX this fall on Data, Analytics, and Learning (registration will open soon). As part of this process, we recently organized a designjam, bringing together 20 or so folks to think through the design process. I’ll post separately on this event. For now, I just want to highlight one aspect of the meeting: the difference between xMOOCs and cMOOCs and possible ways to blend them.

The interest in making xMOOCs more like cMOOCs (a few silly folks have called it MOOC 2.0 – haha) seems to be growing. In particular, MOOC providers are adding “social” in the same way that vitamins are added to food: “Now, with beta-carotene!” After much discussion at our designjam, I’ve concluded that cMOOCs and xMOOCs are incompatible. They cannot be blended. Pedagogically and philosophically, they are too different. It’s like trying to make a cat a dog. Entertaining, perhaps, but a fruitless venture.

Where I think xMOOCs and cMOOCs can work together is as parallel tracks where learners can navigate from one approach to another. During the designjam, I described this as needing pathways based on learner needs at different times in their learning. For example, when I engage with a new content area, I enjoy some structure and guidance. At other moments, I have random urges to create things. Learners should have the freedom to bounce between structured and unstructured pathways based on personal interest.

Matt Crosslin captures these concepts in his blog post (and image below):

Journal of Learning Analytics

Interest in learning analytics is growing. Ours is a data-centric world, and it will only become more so in the future. From my biased view, it is critical that educators are aware of the role of analytics in education because of the heavy influence algorithms, data, and analytics have on teaching, learning, and decision making in schools, colleges/universities, and corporate settings.

SoLAR just announced the inaugural issue of the Journal of Learning Analytics. It is an open access journal. If you’re interested in data and analytics in learning, this is the journal for you! In it, I have a short introduction to SoLAR, the main activities of the organization, and the role (we hope) it plays in bringing together the technical and social domains of learning.

Thoughts on Connectivism

Stephen Downes has posted some thoughts on connectivism.

David Wiley replies, saying that, while interesting, connectivism is incomplete (which I think is great – if it were complete, we could stop working on it).

In 2008, I posted a short presentation on the various ways in which learning is networked: neural, conceptual, and external.

The defining attribute of connectivism as a theory is that it can explain learning at the biological, conceptual, and interaction levels using the same language throughout. Learning biologically is about connection forming. At the conceptual level of knowledge development, it is about connecting concepts and bringing them into relation with others. At the physical and external level, it is about social and technological interactions and connections. The connections in the brain, around concepts, and across social/technological networks share similar attributes (hubs, tie strength, clustering, etc.). While we don’t yet know what that means at a biological/neuronal level, it is clear that these attributes influence how we connect to others, use technology, and develop knowledge. While connectivism was initially seen as a lens for viewing learning, it has developed to the point where actionable pedagogical tactics, self-regulated learning, and design principles can be used to develop connected learning.
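The shared network attributes mentioned above (hubs, clustering) can be computed the same way regardless of whether the nodes are neurons, concepts, or people. Here is a minimal sketch on an invented four-node network; the node names and edges are illustrative only:

```python
# Tiny illustration of network attributes shared across neural, conceptual,
# and social networks. The nodes and edges below are invented.

from collections import defaultdict
from itertools import combinations

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]

# Build an undirected adjacency map.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Hubs: nodes with the most connections.
degree = {node: len(neigh) for node, neigh in adj.items()}

def clustering(node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    neigh = adj[node]
    if len(neigh) < 2:
        return 0.0
    pairs = list(combinations(neigh, 2))
    linked = sum(1 for a, b in pairs if b in adj[a])
    return linked / len(pairs)

print(max(degree, key=degree.get))  # 'A' is the hub (3 connections)
print(clustering("A"))              # 1 of 3 neighbour pairs are linked
```

The same two measures apply unchanged whether the edge list came from brain imaging, a concept map, or a social platform – which is the point of describing all three levels in one language.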

Open Learning Analytics

The future of systems such as business, government, and education will be data-centric. Historically, humanity has made sense of the world through discourse, dialogue, artifacts, myth, story, and metaphor. While those sensemaking approaches won’t disappear, they will be augmented by data and analytics.

Educators often find analytics frustrating. After all, how can you analyze the softer aspects of learning? Or can analytics actually measure what matters instead of what is readily accessible in terms of data? These are obviously important questions. Regardless of how they are answered, however, ours is a data-rich world and will only continue to become more so. All educators need to be familiar with data and analytics approaches, including machine and deep learning models. Why does it matter? Well, to use a Ted Nelson quote that Jim Groom used during his excellent talk at Sloan-C this week, it matters “because we live in media as fish live in water”. Power and control decisions are being made at the data and analytics level of educational institutions. If academics, designers, and teachers are not able to participate in those conversations, they essentially abdicate their voice.

About five years ago, a few colleagues (Shane Dawson, Simon Buckingham Shum, Caroline Haythornthwaite, and Dragan Gasevic) and I got together with a great group of folks and organized the 1st International Conference on Learning Analytics and Knowledge (complete with a logo that any web user of the 1990s would love). Our interest focused primarily on the growing influence of data on educational decisions and on the fact that no empirical research community existed to respond to the bold proclamations being made by vendors about learning analytics. Since then, a community of researchers and practitioners has developed. The Society for Learning Analytics Research was formed, hosting summer institutes, our annual conference, a journal, and a distributed doctoral research lab.

Today we are pleased to announce two new initiatives that we feel will raise the quality of learning analytics, increase transparency around data and algorithms, and create an ecosystem where results can be shared, tested, and validated:

1. Open Learning Analytics. This initiative is based on a paper that we published (.pdf) several years ago. After significant behind-the-scenes work, we are now ready to announce the next steps of the project formally. See here for press release and project scope.

2. Learning Analytics Masters Program (LAMP). The number of masters programs offering learning analytics courses, streams, or certificates is increasing. Several institutions are in the process of developing a masters in learning analytics. To help provide quality curriculum and learning resources, we have launched LAMP: an open access, openly licensed learning analytics masters program. Institutions will be able to use/remix/do whatever with the content in developing their masters programs. Our inaugural meeting will be held at Carnegie Mellon University in a few weeks to kick off this project and start developing the course content.

If data is the future of education and educational decision making, and in many ways it is, I believe openness is the best premise on which to advance. The projects presented here are our contribution to making that happen.

What will universities monetize in the future?

Universities do more than teach. Research is one of the most important activities of higher education. From the lens of students and society, however, the teaching and learning process, and what it costs, is the primary focus.

The university economic and operational structure, in relation to educating learners, can be seen as consisting of three legs of a stool: content/curriculum, teaching, and assessment. The past decade has not been kind to higher education’s economic model, as two legs of the stool – content and teaching – have started to move toward openness. Academic resources can now be found from top universities around the world. If I were tasked with designing a course from scratch, I would start by searching repositories rather than creating any new content.

More recently, the teaching leg of the stool is seeing stress. Open online courses now make lectures of faculty from elite universities accessible to learners around the world (minus a few countries on the US “we don’t like” list).

This leaves assessment as the last leg of economic value. The badges and competency-based learning movement may challenge assessment, but at this point it remains reasonably secure.

What will universities do in the future to monetize their value? I offer the image below – instead of monetizing learning, content, and teaching, universities in the future will monetize assessment and the process of filling learner knowledge gaps. Content is largely free/open. Teaching is becoming more free/open. If something can be duplicated with only limited additional expense, it cannot serve as a value point for higher education. Creating personalized and adaptive learning processes that account for the personal knowledge graph of a learner is, and will likely continue to be, a source of economic value for universities.

University of Texas at Arlington

This is likely not news to most readers, as it has been posted on various blogs and forums and announced at the MOOC Research conference in December, but I have applied for, and received approval of, a leave of absence from Athabasca University to establish a digital learning research lab at the University of Texas at Arlington. I will be based in Arlington but will continue to work with my AU doctoral students.

My research to date has focused on the social and technological learning, sensemaking and wayfinding activities of individuals in digital information environments and how these actions inform the design of learning, curriculum and ultimately institutions. At the core of this research is how people interact with information. When information is limited, it can be assessed and understood individually or through social interactions with peers. When information is abundant, technology extends human cognition and capacity for sensemaking. How people use technology and social methods to make the world sensible, and the types of knowledge institutions required to assist that process, is what we hope to address through the Learning Innovation & Networked Knowledge (LINK) Research Lab.

A second key goal at UTA will be the development of a digital learning research network. Just as local-only classrooms no longer make sense, research institutions that work only within a small local domain no longer make sense either. I’m particularly interested in understanding how we can connect excellent research with practical implementation. More is known about quality learning in the literature than is reflected in classrooms and online courses. The digital learning research network is expected to bring those two domains together.