Most educational software instantiates physical learning spaces. This is reflected in learning management systems, virtual classrooms, and interactive whiteboards. Essentially, we use new tools to do the work of old tools and largely fail, at first, to identify and advance the unique affordances of new technology.
The internet fragments information and antagonizes pre-established information structures. Albums, books, and courses, for example, have a hard time existing as coherent wholes in a network. When individuals have access to tools for creating, improving, evaluating, and sharing content, centralized structures fail. This has been a core argument that Stephen and I have been making since our first open online course, CCK08.
Early in CCK08, we discovered that the central discussion forums and learning content were augmented, even replaced, by distributed interactions. Instead of creating central spaces of learning, our focus in subsequent courses (reflected in Stephen's gRSShopper software) turned to encouraging students to own their own learning spaces. The course, as a result, became more about aggregating distributed interactions than about forcing learners into our spaces. A Domain of One's Own is another great example of promoting learner self-management of identity and learners' ownership of space.
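The aggregation model behind this approach can be sketched in a few lines of code. What follows is my own minimal illustration, not gRSShopper's actual implementation: learners publish to spaces they own, each space exposes an RSS feed, and the course simply merges those feeds into one stream, newest first.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def parse_feed(rss_xml):
    """Extract (title, link, published) items from an RSS 2.0 document.
    Assumes each <item> carries an RFC 2822 <pubDate>."""
    root = ET.fromstring(rss_xml)
    for item in root.iter("item"):
        yield {
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": parsedate_to_datetime(item.findtext("pubDate")),
        }

def aggregate(feeds):
    """Merge items from many learner-owned feeds into a single
    course-wide stream, most recent first. The course aggregates;
    the content stays in spaces the learners control."""
    items = [item for xml in feeds for item in parse_feed(xml)]
    return sorted(items, key=lambda i: i["published"], reverse=True)
```

The design point is that the course holds no content of its own: remove the aggregator and every post still exists where its author put it.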
The challenge with fragmentation is that learning itself is a coherence-forming process. Even when we get information from a variety of sources, we still go through a process of putting these concepts in relation to others. This process is not unlike how we recognize a person's face: many regions of the brain are involved, and the "binding" of these distributed processes is what generates recognition. If connections don't form, learning doesn't happen and knowledge isn't generated.
Educational software in common use today assumes that structure exists a priori. This structure might take the form of a textbook, course content, or a series of lectures. Learners are then expected to duplicate the knowledge of the instructor (hence the notion of knowledge transfer). This mindset is an artifact of physical spaces of learning. When teaching happened only in classrooms, students had to be brought together into a set physical space. It wasn't practical, or cost effective, to cut up textbooks into individual images and small text elements and encourage learners to remix them with other texts and resources.
Physical space and physical structure of information determined suitable pedagogies.
The limitations of physical space have diminished. Information is generally in digital form now, even in traditional classrooms. Contrived structures of coherence are no longer needed in advance of learner engagement with content. Instead, something along the lines of Wolfram's notion of computational knowledge, or schema on read, seems more sensible today.
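Schema on read is easiest to see in code. In this hypothetical sketch (the record fields and the query shape are my own illustration, not a reference to any particular system), learning resources are stored as raw, heterogeneous records, and structure is imposed only at the moment a reader asks a question of them:

```python
import json

# Raw, unstructured records stored as-is: no coherence imposed in advance.
# (Schema-on-write would force every record into one fixed table up front.)
raw_store = [
    '{"type": "video", "topic": "networks", "minutes": 12}',
    '{"type": "article", "topic": "networks", "words": 1400}',
    '{"type": "article", "topic": "analytics", "words": 900}',
]

def read_with_schema(store, schema):
    """Apply a schema at read time: keep only the records that have
    the requested fields, projected into the requested shape."""
    out = []
    for line in store:
        record = json.loads(line)
        if all(field in record for field in schema):
            out.append({field: record[field] for field in schema})
    return out

# The same store answers differently shaped questions; the structure
# comes from the reader, not from the storage.
articles = read_with_schema(raw_store, ["topic", "words"])
```

Two readers with different schemas get two different coherent views of the same underlying fragments, which is the analogy to learners forming their own coherence.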
I’ll take it a few steps further: in the near future, all learning will be boundary-less. All learning content will be computational, not contrived or prestructured. All learning will be granular, with coherence formed by individual learners. Contrived systems, such as teaching, curriculum, content, and accreditation, will be replaced, or at minimum augmented, by models based on complexity and emergence (with a bit of chaos thrown in for good measure). Perhaps it will be something like, and excuse the cheesy name, a learnometer. Technical systems will become another node in our overall cognitive systems. Call it embodied cognition. Or distributed cognition. Or appeal to Latour’s emphasis that technical nodes in knowledge systems can be non-human and can actually be seen as equal to human nodes. I’ve used the term connectivism to describe this. Others have emphasized networked knowledge and combinatorial creativity.
The terminology doesn’t really matter.
The big idea is that learning and knowledge are networked, not sequential and hierarchical. Systems that foster learning, especially in periods of complexity and continual change to the human knowledge base, must be aligned with this networked model. In the short term, hierarchical and structured models may still succeed. In the long term, and I’m thinking in terms of a decade or so, learning systems must be modelled on the attributes of networked information, reflect end-user control, take advantage of connective/collective social activity, treat technical systems as co-sensemaking agents alongside human cognition, make use of data in automated and guided decision making, and serve the creative and innovation needs of a society (actually, the human race) facing big problems.
Over the last several years, the challenge of creating a learning system that reflects the attributes of networked information and enables heightened creativity has been a growing personal research interest. Last year, a colleague from Athabasca University, Dragan Gašević, introduced me to a research project that he was leading, which I found addressed many of the shortcomings in learning systems today. We joined forces. Together with another colleague, Shane Dawson, and programmers Nikola Milikić and Zoran Jeremić, we’ve been working on what we feel is a learning system (educational software) that represents the type of learning needed by individuals and organizations today.
We have run pilots with the software and have a few additional pilots planned for fall. We want to move beyond closed, course-based pilots and engage in messier and sloppier learning experiences. To this end, we are offering an open online course on learning analytics starting November 1, 2013. If you are interested in joining the course, please register here. We are looking for feedback on the system itself – what’s the end-user experience like for people who didn’t design it? What makes sense? What do you want the software to do? What is missing?