Get the complete book Thinking Strategically about MOOCs: The Role of Massive Open Online Courses in the College and University at Amazon in print or Kindle edition.
This chapter is broken into three sections that explore three closely related topics that are not always associated with MOOCs but should be: big data, learning analytics, and adaptive learning. Each of these topics has implications for higher education policy and programs, thus meriting critical review by campus leadership. MOOCs that integrate adaptive learning tools based on big data and learning analytics will likely extend the benefits of these systems, but will also amplify questions and issues related to big data and analytics.
* * *
1. Big Data
Big Data is at the heart of modern science and business.
~ Francis X. Diebold, University of Pennsylvania
MOOCs and other online education spaces generate tremendous amounts of data about student behavior, learning styles, and interactions with course material, teachers, and other students. MOOCs also provide data about time spent on particular assignments and engagement in general with the MOOC environment. We can know when students are online and offline, and for how long. Ostensibly, analysis of this data can teach us a great deal about learning. This possibility has people trumpeting Big Data as the next big thing.
Big data services are already all around us. Google and Amazon collect and analyze tremendous amounts of data on their users and customers. Government agencies use big data and analytics to identify patterns of behavior. In general, government and business expect Big Data to help drive decision-making with data and analysis rather than intuition and experience.
With the advent of online learning, higher education is now following the lead of government agencies and for-profit corporations by dipping its toe in the big-data waters. In 2011, Ganesan (Ravi) Ravishanker, Chief Information Officer at Wellesley College, published “Doing Academic Analytics Right: Intelligent Answers to Simple Questions” in Educause’s ECAR Research Bulletin. Ravishanker argues that “data-driven decision-making is ever more essential.” He goes on to say that institutions will do well to encourage systemic interaction with data reports as part of the process of ensuring a return on their investment, and that applying data analytics to institutional learning environments is an opportunity missed on most campuses. With respect to learning management systems and the volumes of interpretable data they represent, Ravishanker encourages campus leaders to question “student and faculty access patterns, how many artifacts are associated with a course, how are students and faculty using the system.”
Similarly, in 2012, IBM and Campus Technology published “Building a Smarter Campus: How Analytics is Changing the Academic Landscape,” reporting that higher ed institutions are increasingly recording the events, activities and assignments of their students. Implementing tools to analyze that data will give decision-makers the ability to predict learning outcomes and better attend to individual needs: “As the amount of data in higher education is increasing exponentially, data analytics is fast becoming the process-of-choice for colleges and universities that want to improve student learning and campus operations. By turning masses of data into useful and actionable intelligence, higher education institutions are creating smarter campuses—for now and for the future.”
Educause, in its 2012 “Study of Analytics in Higher Education,” defines analytics as the “use of data, statistical analysis, and explanatory and predictive models to gain insights and act on complex issues.” Leaders in higher education are increasingly aware of analytical tools at their disposal. “Predictive tools” help analyze what has happened in a given scenario in order to understand what is likely to happen in the next one; “prescriptive tools” then provide recommendations on how best to respond. Such analytics display patterns in student-generated data and project potential outcomes, allowing for informed decisions based on solid projections rather than on intuition.
MOOCs—and even “traditional” local classroom learning systems—produce massive amounts of data that we may well want to analyze with these tools, in the hopes of improving learning outcomes. Online learning systems, aggregations of data, and the availability of analytics could be converging to rewire the teaching-and-learning circuitry. That potential is driving new vendors to target higher education. A brief review of some emerging players and their products highlights the scope of this emerging academic support industry.
2. Learning Analytics: Three important companies you should know
Apollo Group and Carnegie Learning. In August 2011, the Apollo Group, which runs the University of Phoenix, bought Carnegie Learning, which develops interactive adaptive learning software for math instruction, for $75 million. “Founded by cognitive and computer scientists from Carnegie Mellon University in conjunction with veteran mathematics teachers,” Carnegie’s website declares, “Carnegie Learning has the courage to not only question the traditional way of teaching math, but re-invent it.”
Carnegie Learning entered the higher education market in 2007 after working primarily in middle school and secondary school markets. The company thus has a deep reservoir of content and a decade of experience in developing adaptive learning systems. With its acquisition, Apollo further extends its personalized instruction platform to a broader post-secondary student audience. Apollo hired Mike White away from Yahoo to serve as the Chief Technology Officer for the re-organized Carnegie Learning and assigned more than one hundred technologists to the project. According to White, Apollo sees “adaptive learning as the future. It is about individual learning outcomes.”
Pearson and Learning Catalytics. In Spring 2013, Pearson, which has spent more than $1 billion on education companies since 2011 in an effort to extend its reach beyond textbooks and other publications, acquired Learning Catalytics, a learning assessment system created by Harvard University educators Eric Mazur, Brian Lukoff, and Gary King.
The Learning Catalytics system grew out of Mazur’s persistent efforts to perfect interactive teaching. That effort is documented in Peer Instruction: A User’s Manual, which outlines the Peer Instruction method he began developing in the early 1990s and which helped fuel the development and adoption of “classroom clicker” technology. Learning Catalytics’ cloud-based software system builds on the clicker model to mine data so as to better “engage students by creating open-ended questions that ask for numerical, algebraic, textual, or graphical responses—or just plain multiple-choice,” in the words of the company’s website. Instructors use data from the system to send peer interaction directions to students. “Students use any modern web-enabled device they already have—laptop, smartphone, or tablet,” and the system mines data generated by their responses to open-ended questions to direct them to peers for interaction and debate.
In “Colleges Mine Data to Tailor Students’ Experience” (The Chronicle of Higher Education, December 11, 2011), Marc Parry describes how the system is used to direct peer instruction activities in Brian Lukoff’s Harvard calculus class: “The software records Ben Falloon’s location in the back row and how he answers each practice problem. Come discussion time, it tries to stir up debate by matching students who gave different responses to the most recent question. For Mr. Falloon, the system spits out this prompt: Please discuss your response with Alexis Smith (in front of you) and Emily Kraemer (to your left).”
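The matchmaking step Parry describes can be sketched in a few lines. The data shape, the greedy nearest-pair strategy, and the seat-distance measure below are illustrative assumptions, not Learning Catalytics’ actual implementation:

```python
from itertools import combinations

def match_peers(students):
    """Greedily pair students who gave different answers to the last
    question, preferring pairs who sit close together.

    `students` is a list of dicts with 'name', 'answer', and 'seat'
    (row, col) keys -- a hypothetical data shape for illustration.
    """
    def distance(a, b):
        (r1, c1), (r2, c2) = a["seat"], b["seat"]
        return abs(r1 - r2) + abs(c1 - c2)  # walking distance in the room

    # Consider only pairs whose answers differ, nearest pairs first.
    candidates = sorted(
        (p for p in combinations(students, 2) if p[0]["answer"] != p[1]["answer"]),
        key=lambda p: distance(*p),
    )
    paired, matches = set(), []
    for a, b in candidates:
        if a["name"] not in paired and b["name"] not in paired:
            paired.update([a["name"], b["name"]])
            matches.append((a["name"], b["name"]))
    return matches
```

Run on a toy roster, the function pairs a back-row student with a nearby neighbor who answered differently, much as the prompt sent to Mr. Falloon suggests.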
Instructors receive graphical displays of the responses, recommendations, and results of the interactions. Advised by the system to interact, the students discover that their answers differ and begin to debate—exactly what the matchmaking algorithm intended. Meanwhile, “Mr. Lukoff’s screen displays a map of how everyone answered the question, data he can use to eavesdrop on specific conversations.”
Skeptics, who believe that simply monitoring and cataloging data responses to classroom questions minimizes or even eliminates creativity in the learning environment, consider the modifying of college-level teaching and learning through the use of analytics akin to employing standardized testing in primary and secondary education—with predictable and similar results. Mazur counters that learning analytics systems solve three problems faced by faculty in the contemporary classroom: “One, it selects student discussion groups. Two, it helps instructors manage the pace of classes by automatically figuring out how long to leave questions open so the vast majority of students will have enough time. And three, it pushes beyond the multiple-choice problems typically used with clickers, inviting students to submit open-ended responses, like sketching a function with a mouse or with their finger on the screen of an iPad.”
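Mazur’s second point—automatically deciding how long to leave a question open—amounts to a simple stopping rule. The quorum and timing values below are illustrative assumptions, not figures from the Learning Catalytics system:

```python
def should_close(responses_in, class_size, elapsed_seconds,
                 quorum=0.8, min_seconds=30):
    """Decide whether to close an open question: keep it open until a
    large majority of the class has responded and a minimum amount of
    time has passed, so slower students are not cut off.

    The 80% quorum and 30-second floor are hypothetical defaults.
    """
    enough_responses = responses_in / class_size >= quorum
    enough_time = elapsed_seconds >= min_seconds
    return enough_responses and enough_time
```

An instructor dashboard could poll this rule every few seconds and close the question once it returns true.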
Michael Horn, co-founder and executive director of the Christensen Institute, argues that the ability to harness data generated by analytics frees instructors to focus on working directly with students. Ravishanker also notes in his paper that capturing data about learning activities both in class and online provides faculty and provosts a wealth of information to help evaluate systems currently in place. Horn and others go further, arguing that analytics will help higher education move away from a factory model of education toward a learning-focused model.
As corporate entities like Pearson partner with the academy, it will be important to review these developments with a critical eye on how they cohere with the strategic needs of your institution. It will be difficult to dismiss out of hand resources that make your own data so readily available, possibly enabling informed development of learning platforms that make sense for this century.
Desire2Learn. Founded in 1999 by John Baker, Desire2Learn provides cloud-based learning management systems for higher education. In 2012, Desire2Learn entered the learning analytics arms race in earnest with $80 million in financing from New Enterprise Associates and OMERS Ventures. According to an NEA press release, the company is focused on “transforming the way the world learns in a rapidly growing market fueled by the adoption of online and mobile learning tools, digital textbook distribution, and advanced learning analytics.”
Desire2Learn has been building toward adding learning analytics to its platform for some time, having developed a team to build a framework of algorithms and predictive models to analyze student learning. The team developed a “risk quadrant” that provides weekly predictive representations of individual learners’ progress in a given course. Quadrants display students who are fully engaged and on track towards passing; students who are less engaged but still on track; students who are in danger of withdrawing from the course; and students who are in danger of failing or receiving a poor grade.
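The four quadrants described above follow from crossing two signals: how engaged a learner is and how likely the learner is to pass. A minimal sketch, assuming two normalized weekly scores and illustrative cutoffs (the signal names and thresholds are not Desire2Learn’s actual model):

```python
def risk_quadrant(engagement, predicted_grade,
                  engagement_cut=0.5, grade_cut=0.7):
    """Place a learner in one of four risk quadrants from two weekly
    signals, each on a 0-1 scale: an engagement score and a predicted
    grade. Both scales and both cutoffs are hypothetical.
    """
    engaged = engagement >= engagement_cut
    on_track = predicted_grade >= grade_cut
    if engaged and on_track:
        return "engaged, on track to pass"
    if not engaged and on_track:
        return "less engaged, still on track"
    if not engaged and not on_track:
        return "at risk of withdrawing"
    return "engaged but at risk of failing"
```

A weekly report would simply apply this classification to every learner in the course and display the four resulting groups.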
The tool begins making data-driven predictions on the first day of a course. Interviewed by Ellis Booker for Information Week (“Can Big Data Analytics Boost Graduation Rates?” February 5, 2013), Baker brashly described how dynamic student data, meshed with available historical course data, provides a data framework allowing the system to make predictions of student learning performance with 95 percent accuracy as early as weeks two and three. Desire2Learn launched the learning analytics product as part of an integrated suite of resources called Desire2Learn Insights, which the company says can deliver high-performance reports, data visualizations, and predictive analytics to help institutions measure the success of their overall learning environment.
Desire2Learn has taken the analytics toolkit for the classroom that providers like Learning Catalytics have deployed, and implemented it in a cloud-based, intentionally online learning platform wrapped within a familiar learning management system. This has interesting implications for campuses making decisions about MOOCs and other forms of online education.
3. Adaptive Learning
In his article “A History of Teaching Machines” (American Psychologist, September 1988), Ludy T. Benjamin traced the pedigree and legacy of teaching machines in the U.S.: “By the early 1960s, teaching machines were much in the news. National and international conferences were held to discuss the new technology, and popular magazines and scientific journals published news of the emerging research and applications.”
Online learning and MOOCs, big data, and analytics have re-energized a long-standing educational initiative—technology-mediated teaching. Adaptive learning, building on big data and learning analytics, is the latest iteration. Modern adaptive learning employs data-driven analytics to help faculty shape the delivery of course materials to adapt to individual abilities. These tools offer personalized learning, mediated by technology. In his essay “Adaptive Learning Could Reshape Higher Ed Instruction” (April 4, 2013, Inside Higher Ed), Peter Stokes, executive director of postsecondary innovation in the College of Professional Studies at Northeastern University, describes adaptive learning as “an environment where technology and brain science collaborate with big data to carve out customized pathways through curriculums for individual learners and free up teachers to devote their energies in more productive and scalable ways.” (Stokes is also a contributor to the report “LEARNING TO ADAPT: A Case for Accelerating Adaptive Learning in Higher Education,” funded by the Bill and Melinda Gates Foundation.)
There is a long history of teaching machines—mechanical, multimedia, and computer-based—extending back to an 1809 patent for an educational appliance for the teaching of reading. By 1936, there were nearly seven hundred patents for teaching devices. The history of these devices can be traced from the original patented machines of the nineteenth century through the teaching/testing devices of Sidney Pressey in the 1920s to the more sophisticated teaching machines of Harvard psychologist B.F. Skinner in the 1950s.
Where initial devices were more about testing, late-twentieth-century efforts focused on teaching that enables students to adapt to machine-provided feedback. B.F. Skinner created a mechanical “teaching machine” in the mid-1950s that broke learning into sequenced steps and allowed students to pace themselves as they worked through a series of questions. The steps resembled processes that tutors use to engage students and guide them, via feedback, toward increasingly accurate responses and new knowledge. The machine posed questions and offered new questions only when the student answered correctly; an incorrect answer caused the machine to repeat the question. Skinner’s efforts eventually fell out of favor in part because few companies were willing to invest in designing and developing materials for a product with an indeterminate future, but interest in adaptive learning persisted through the latter half of the century with the emergence of affordable personal computers.
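The control logic of Skinner’s machine—advance only on a correct answer, otherwise repeat the frame—can be sketched directly. The data shapes here (a list of question/answer pairs and a callable standing in for the learner) are illustrative stand-ins for the mechanical device:

```python
def run_frames(frames, answer_source):
    """Step a learner through sequenced question frames, Skinner-style:
    advance only on a correct answer, repeat the frame otherwise.

    `frames` is a list of (question, correct_answer) pairs and
    `answer_source` is a callable returning the learner's answer --
    both hypothetical representations of the original machine.
    """
    attempts = []  # full history of (question, answer) attempts
    for question, correct in frames:
        while True:
            answer = answer_source(question)
            attempts.append((question, answer))
            if answer == correct:
                break  # advance to the next frame
    return attempts
```

The self-pacing Skinner emphasized falls out naturally: the loop waits on the learner, and the attempt history records how much repetition each frame required.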
Contemporary instructional designers adhere to Skinner’s basic tenets, offering adaptive learning tools that present course materials to students who do not move on to subsequent questions until their performance, based on data generated in the adaptive learning process, indicates competency and knowledge. Adaptive learning combines individualized instruction (or rather, something that feels like it to the student), peer interaction, effective and engaging simulations, and applications that dynamically adapt to the learner’s abilities.
Because MOOCs and related online and software-mediated learning environments leverage earlier adaptive-learning techniques, campus leadership should consider the value of making this historical overview part of their consideration of MOOCs. They might also look for ways to engage private-sector partners in their strategic thinking, particularly since corporate startups are eager to partner with colleges and universities in developing adaptive learning products. In April 2013, Rice University held the first annual Workshop On Personalized Learning, bringing together leaders from higher education and adaptive-learning startup companies to “plot a course to the future of personalized learning” (http://rdls.rice.edu/personalized-learning-workshop). Participants were invited to explore the potential of big data to ensure time and cost efficiencies in the delivery of learning outcomes. Presenters included researchers from Knewton, Carnegie Learning, and Khan Academy as well as faculty and researchers from MIT, Arizona State University, and Duke University.
Meanwhile, the Bill and Melinda Gates Foundation is investing heavily in adaptive learning. The foundation has solicited proposals from colleges and universities for ten $100,000 grants to help develop new partnerships implementing adaptive learning courses. To encourage participation, the foundation hosted a March 2013 webinar outlining the details of its Adaptive Learning Market Acceleration Program. The session showcased new research indicating that “intelligent” (meaning “digital”) tutors are nearly as effective as humans, citing related research by educational psychologist Benjamin Bloom. In 1984, in the journal Educational Researcher, Bloom reported that students tutored one-to-one performed two standard deviations better than students taught in conventional lecture courses. Commercial MOOC providers have also touted Bloom’s research.
The foundation’s strategy is to invest in “market change drivers” that include exemplary implementations of adaptive learning courses combined with research and analysis of learning outcomes in order to accelerate the adoption of adaptive learning in higher education. The foundation has also formed a loose coalition of leaders from a dozen colleges and two associations to share information about developing and implementing adaptive learning. These schools and this coalition could become a rich laboratory for partnerships between technology vendors and campuses. Participants in the Gates Foundation group include:
- American Association of State Colleges and Universities
- American Public University System
- Arizona State University
- Association of Public and Land-Grant Universities
- Capella University
- Excelsior College
- Kaplan University
- Kentucky Community and Technical College System
- Rio Salado College
- Southern New Hampshire University
- SUNY Empire State College
- University of California at Berkeley
- University of Texas at Austin
- Western Governors University
The Gates Foundation also commissioned a report from Education Growth Advisors, entitled “Learning To Adapt: A Case for Accelerating Adaptive Learning in Higher Education.” (The group also issued a more comprehensive report entitled “Learning to Adapt: Understanding the Adaptive Learning Supplier Landscape.”) The report outlines the potential of adaptive learning and how it might help address the “Iron Triangle” of cost, access, and quality, and describes potential adoption paths, opportunities and barriers, solutions, and case studies. It attempts to detail the capabilities of emerging adaptive learning products in an effort to help college leadership make decisions.
The report simplifies campus review of adaptive learning options through analysis of potential benefits of current and emerging providers and products, and includes very brief case studies from a few universities. The document is not unbiased; it reflects the enthusiasm and vision of the Gates Foundation and those who contributed to the narrative. (As its website states, “Education Growth Advisors is affiliated with Education Growth Partners, a Stamford Connecticut-based private equity firm focused exclusively on growth equity investments in education companies in the preK-12, higher education, corporate training, and lifelong learning sectors. Education Growth Partners invests in profitable, innovative, high-potential companies that are seeking capital and expertise to reach scale.”) The report opens with three congratulatory scenarios about successful experiences in personalized education, each extolling the potential of adaptive learning systems. The first paragraph of the report is unambiguous: “Welcome to the world of adaptive learning—a more personalized, technology-enabled, and data-driven approach to learning that has the potential to deepen student engagement with learning materials, customize students’ pathways through curriculum, and permit instructors to use class time in more focused and productive ways. In this fashion, adaptive learning promises to make a significant contribution to improving retention, measuring student learning, aiding the achievement of better outcomes, and improving pedagogy.”
One hears echoes of B.F. Skinner and other, earlier, proponents of now-antiquated “teaching machines” in the report’s descriptions of students interacting with course materials in a digital environment. Echoing those earlier claims, the report promises that when “students answer particular questions incorrectly, they may be directed back to appropriate points in the materials to better acquaint themselves with the relevant concepts or facts.” It would be difficult to fault the reader for concluding that this is simply the latest iteration of the same old story. It might be tempting to dismiss such reports (and many do) as vehicles for corporate expansion into new and profitable markets. We have survived several generations of enthusiasts and profiteers working to develop technology-mediated education products; it is tempting to say that this one will soon burn itself out as well.
But this time may very well be different. Unlike the earlier mechanical contraptions, which were isolated (and isolating), everyone now has a “teaching machine” in his or her shirt pocket. We are interconnected and interactive in ways not possible before. We have become predisposed to the use of ubiquitous technologies that mediate our information and communication exchanges. On the policy side, there is focused corporate and governmental pressure to innovate in the interest of increasing access to education and improving graduation rates. Foundations are investing in collaborative programs and families are ready for educational alternatives that do not saddle them with life-long debt.
Carnegie Mellon Open Learning Initiative (OLI)
More than a decade ago, the late Herbert Simon, Nobel Laureate and professor at Carnegie Mellon University, stated, “Improvement in postsecondary education will require converting teaching from a solo sport to a community-based research activity.” Simon’s emphatic stance has informed the development and implementation of Carnegie Mellon’s Open Learning Initiative (OLI), which began as an effort to integrate digital cognitive tutors and standalone online courses. Beginning in 2002, with funding from the William and Flora Hewlett Foundation, the OLI embarked on a program to develop an online curriculum for “anyone who wants to learn or teach” (Source: OLI website, http://oli.cmu.edu).
In his article “The Real Precipice,” Richard Holmgren, Vice President for Information Services and Planning at Allegheny College, cautions that the “real threat to traditional higher education embraces a more radical vision that removes faculty from the organizational center and uses cognitive science to organize the learning around the learner. Such models exist now.” Holmgren goes on to describe how the OLI uses a team approach that includes cognitive scientists, instructional designers, technologists, and faculty disciplinary specialists to design interactive online courses. For a decade or more, the work of the OLI has embodied Peter Stokes’ definition of adaptive learning as “an environment where technology and brain science collaborate with big data to carve out customized pathways through curriculums for individual learners.” Further, the OLI process is an example of learning analytics at work, as researchers use the data generated from course interactions for persistent course evaluation and re-development as articulated in the goals of the program. The goals of the Carnegie Mellon Open Learning Initiative include:
- Support better learning and instruction with scientifically based, classroom-tested online courses and materials.
- Share courses and materials openly and freely so that anyone can learn. OLI courses are used by institutions to supplement classroom instruction. They are also designed to support individual independent learners.
- Develop a community of use, research, and development to allow for the continuous evaluation, improvement, and growth of courses and course materials.
(Source: OLI website, http://oli.cmu.edu)
Holmgren notes that although the OLI is a “proof-of-concept endeavor,” it has made compelling advances in blending cognitive science, machine learning, and instructional design. These advances are documented in research from Ithaka S+R that compared face-to-face learning to the hybrid courses rooted in the OLI model. Holmgren notes that according to the study “hybrid courses were at least as effective in promoting student understanding of statistics as traditional courses. Further, students in the hybrid courses learned as much even though they spent significantly less time in learning activities, which echoes earlier work by OLI showing that Carnegie Mellon students learned statistics with OLI in half the time that students in traditional courses did.”
Carnegie Mellon has enjoyed a long history of successful integration of cognitive science and technology into learning environments. Prominent among those efforts is the for-profit Carnegie Learning, recently purchased by the Apollo Group. Carnegie Learning’s development of digital cognitive tutors to assess students’ knowledge and competency and provide a curriculum tailored to individual skill levels was instrumental in crafting the OLI. Embedded cognitive tutors, interactive engagements, and immediate feedback are fundamental components of the OLI. Anya Kamenetz describes the assessment and feedback experience of a learner using the OLI as “what might happen in a classroom under ideal circumstances, with a teacher of infinite patience, undivided attention, and inexhaustible resources of examples and hints” (Kamenetz, Anya, DIY U: Edupunks, Edupreneurs, and the Coming Transformation of Higher Education, New York: Chelsea Green, 2010, 91).
As Holmgren notes, the findings of the Ithaka S+R study are a bit of a milestone. “We can howl in protest, but the question is no longer whether computer-based, intelligent agents can prompt learning of some material at least as well as instructor-focused courses. The question is whether the computer-based version can become even more effective than traditional models, and the implications for higher education are sobering.”
In addition to demonstrable learning outcomes, the assessment and feedback model helps with one of the other distinctive aspects of the OLI: the process is in a persistent state of iterative research, design, assessment, and re-deployment. Cognitive science and instructional design inform initial course development and production, and aggregated data from intentional feedback loops informs subsequent iterations.
In the foreword to Unlocking The Gates: How And Why Leading Universities Are Opening Up Access To Their Courses (Taylor Walsh and Ithaka S+R, Princeton University Press, 2011), William G. Bowen describes the benefits and costs of Carnegie Mellon’s Open Learning Initiative. Bowen writes that “The OLI is exciting precisely because it may offer the possibility of achieving real productivity gains by substituting well-designed online instruction for the labor-intensive ways in which we still teach many basic courses, including some that lend themselves to less labor-intensive teaching methods.” Bowen argues that, despite the costs of programs like the OLI, in the current economic environment “we just can’t afford to continue doing business as usual. We have to find ways to do more with less. Resources saved in this way could be redeployed to teach more students or, conceivably, to teach advanced students more effectively.” In his view, the OLI has potential in large part because it “lends itself to standard statistical assessments of outcomes—of what was achieved, and at what cost.”
The cost is significant. As noted in the Ithaka S+R study, “a new OLI course currently costs about $500,000 to develop—and that figure represents a decline over time, as some of the earlier courses cost over $1 million each.”
With significant overhead involved in developing even a single course in this model, it is not surprising that so few courses have been completed. However, the cost does not mean lack of utility or viability. Because Carnegie Mellon is committed to the Open Education Resources movement, all of the OLI courses are available and are being used by educators and students all over the world. According to the Ithaka S+R study, between 2006 and 2010 there were 18,516 student registrations for the Academic Version and—significantly—73,062 registrations for the open (and free) courses representing global use in 214 countries. Although it would be financially impossible for a lone institution to adopt and sustain the grant-funded OLI methodology to develop and deliver adaptive learning courses, it is conceivable that the model has potential in the distributed, unbundled model of MOOCs.
The “New Way College” Model. In his “Precipice” article in Inside Higher Ed, Holmgren offers a hypothetical scenario for just how an alternative model integrating the OLI, existing universities, and corporate interests might work. In this scenario, a university that is already committed to, and active in, online competency-based credentialing would partner with a provider similar to the OLI to create a fictional online college (call it “New Way College”) within the larger host institution. New Way College provides a basic curriculum of hybrid courses with no more than twenty students per course, enabling students to earn an associate degree at significant cost savings.
Holmgren’s financial model for New Way College is compelling: Students pay $400 for a four-credit course, and sixteen courses are required for an associate degree. Student cost for the degree would be $6,400, and students qualifying for the maximum Pell grant could have their degree fully underwritten by the federal government. For local organizations, suppose New Way partnered with a public library to offer a course for fifteen students, and that the course was staffed by a volunteer from a literacy program. In this example, the public library collects $1,800, which would be a boon for local libraries beset by budget cuts, while the curriculum provider and the host university each collect $600, and the testing/proctor firm would receive $3,000. Considered at the scale of the university and corporate partners, even a modest program of 30 courses with average enrollment of 10,000 students would generate $84 million to be divided among the curriculum provider, host university, and testing service. Another $36 million would be split among local hosts.
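The figures above all follow from one per-student split of the $400 fee. The article gives only totals, so the per-student shares below are inferred from those totals rather than stated by Holmgren:

```python
# Per-student split of the $400 course fee, inferred from the totals
# in Holmgren's scenario (not stated explicitly in the article).
SPLIT = {
    "local host": 120,          # e.g., the public library
    "curriculum provider": 40,
    "host university": 40,
    "testing/proctoring": 200,
}

def revenue(enrollments):
    """Dollars flowing to each party for a given number of enrollments."""
    assert sum(SPLIT.values()) == 400  # the shares must exhaust the fee
    return {party: share * enrollments for party, share in SPLIT.items()}
```

With `revenue(15)` the split reproduces the library-course figures ($1,800 local, $600 each to provider and university, $3,000 to testing); with 30 courses of 10,000 students each (300,000 enrollments), it reproduces the $36 million to local hosts and the $84 million divided among the other three parties.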
“By unbundling the learning experience—separating local support, course design, delivery, assessment, administrative support, and advising—the NWC model achieves superior outcomes at lower cost, at least when outcomes are measured by exam or other task performance. Local organization and student support is provided by entities with deep roots in their communities, missions aligned with the educational endeavor, existing meeting spaces that are often underutilized and could readily be used to house weekly class meetings, access to volunteer or relatively low-cost tutors to provide student support, and budget constraints that create incentives to leverage these resources to market and support classes for their communities,” Holmgren writes. Such a model could well revitalize local host organizations looking for renewed revenue in harsh economic times. Such disruptive innovation could also wreak financial havoc with local colleges competing in the same market in more traditional fashion.
It is easy to imagine organizations like the Saylor Foundation and StraighterLine developing such a model. A surplus of post-doc Ph.D.s with no realistic hope of securing a traditional tenure-track position provides an abundant labor force for local host organizations. The enthusiasm and prior investment of venture capital players in this market suggest that such a model may well be highly attractive to them.
There are real limitations to the OLI model, including significant overhead costs. Further, the model seems to be restricted to courses like mathematics and statistics, with the humanities completely unrepresented to date. These limitations notwithstanding, the extensibility of the program shows promise: the OLI’s iterative model of persistent research, design, and development, making use of data generated by student interaction with material, will likely be integrated into, and influence, MOOCs and other online platforms. Such integration, in fact, is already under way.
Platforms and Publishers: delivering on adaptive learning. The report “Learning To Adapt” makes an important distinction between adaptive learning “platforms” and course content “publishers.” To date, campuses have generally had choices between vendors offering platforms with adaptive learning authoring tools and publishers providing course content with delivery models that try to incorporate adaptive learning.
Platform providers sell infrastructure and software for developing adaptive learning models. Examples include aNewSpring, Cerego, CogBooks, Knewton, LoudCloud, and Smart Sparrow. Publishers active in this market include traditional firms eager to capitalize on emerging commercial opportunities—such companies as Cengage, Jones & Bartlett Learning, Macmillan, McGraw-Hill, Pearson, and Wiley. Emerging digital-only publishers include Adapt Courseware and the Open Learning Initiative.
For now, the most successful providers working in the adaptive learning market may be those traditional publishers with the wherewithal to leverage existing content in new ways while negotiating productive partnerships with emerging platform providers. These partnerships are worth monitoring as they provide vendors a powerful vehicle to sell innovative educational resources to higher education and to insert themselves into the dialogue about the future of educational content. As Mitchell Stevens of Stanford University has noted, who gets a seat at that table is still up for grabs. Campus leaders would do well to shoulder their way in and not wait for an invitation to help shape the future.
Of new partnerships that have emerged, that between publisher Pearson and platform provider Knewton appears to have gained the most significant traction. Pearson is busily amassing a substantial portfolio of education companies through both purchases and partnerships. In addition to acquiring Learning Catalytics, for example, they have partnered with a rising startup called Knewton.
Founded by former Kaplan executive Jose Ferreira, Knewton is an adaptive learning infrastructure platform provider. Expanding on earlier successes of efforts like Carnegie Mellon’s Open Learning Initiative, the Knewton infrastructure “makes it possible for anyone to build the world’s most powerful adaptive learning applications. Knewton technology consolidates data science, statistics, psychometrics, content graphing, machine learning, tagging, and infrastructure in one place in order to enable personalization at massive scale” (Source: http://www.knewton.com).
Knewton builds on the past decade of work in big data and learning analytics. Ferreira is adamant about the power of mining big data and marrying analytics to digital course content in order to turn the traditional classroom on its head, thereby freeing instructors to manage their time in new ways. Ferreira and his team are aggressively pursuing the potential of big data and analytics. In an interview with Marc Parry (“A Conversation With 2 Developers of Personalized-Learning Software,” The Chronicle of Higher Education, July 18, 2012), Ferreira noted the disparity between the data available to services like Google and the data generated by a student engaging with digital course materials: “You do a search for Google; Google gets about 10 data points. They get, by our standards, a very small amount of data compared to what we get per user per day. If they can produce that kind of personalization and that kind of business, based off the small amount of data they get, imagine what we can do in education.” Ferreira and the Knewton team have developed a platform to extract a great deal of data from the user experience: “Knewton’s capturing in the hundreds of thousands of data per user per day. We’re capturing what you’re getting right, what you’re getting wrong, what answers you’re falling for if you get something wrong, what concepts are in that answer choice that you’re falling for. We’re also capturing when you log into the system; how much you do; what tasks you do; what you don’t do; what was recommended that you do that you didn’t do, and vice versa.”
All of this data extraction results in predictions about learning outcomes followed by prescriptions for follow-up actions for each student. Applying the system’s learning analytics to the data generated by student interactions leads to “the perfect sentence, or perfect clip, or perfect problem for you at any one time, based on what you’re the weakest at, and what’s most important, and how you learn it best.” As part of this effort, Knewton is launching what it calls “learning modality adaptivity,” a feature that will discern what and how much to show each student each day. The module is intended to understand how students learn best and when to present content appropriate to learning abilities and demonstrated progress. According to Ferreira, his system will “figure out things like, you learn math best with a video clip, or you learn science best with games instead of text, or in addition to text—and we can figure out what the optimal ratio is for you. We can figure out things like, you learn math best in the morning, and verbal concepts best in the evening, on average. Maybe you learn math best between 8:32 and 9:14. If so, we’ll know it. It means when you show up in the morning to do some practice, we’re going to try to feed you math, and if you show up in the evening, we’ll try to feed you more verbal, because that’s when you’re most receptive to those subject matters.”
Adaptive learning systems do not stop with individual learners. The power of such programs lies in the interconnectivity of all data streams of all students in the course. Predictions and prescriptive actions are personalized for each student, with the learning trajectories of different students divergent by design. The system decides which course modules will be presented to each student, and when, based on data mining and analytics. In this regard, Knewton is representative of the marriage of big data, learning analytics, and personalized adaptive learning that fosters such potentially disruptive models as self-paced learning and flipped classrooms. By measuring productivity and progress, Knewton’s and other adaptive learning systems will recommend different times of the day for different students to “crack the book.” This blows up the standardized classroom model governed by the calendar and the clock, and gestures toward a hybridized, self-paced learning model.
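The prediction-and-prescription loop described above can be illustrated with a toy sketch. Everything here (the module names, the mastery threshold, the update rule) is invented for illustration; it is not Knewton’s algorithm, only the general shape of one:

```python
import random

# Toy adaptive-sequencing loop: estimate each student's mastery per concept,
# then recommend the module targeting the weakest concept. Module names,
# threshold, and update rule are invented; real systems are far richer.
MODULES = {"fractions_video": "fractions", "fractions_drill": "fractions",
           "ratios_text": "ratios", "ratios_game": "ratios"}

def next_module(mastery, threshold=0.8):
    """Recommend a module for the student's weakest concept, or None if done."""
    weakest = min(mastery, key=mastery.get)
    if mastery[weakest] >= threshold:
        return None  # all concepts at or above mastery threshold
    candidates = [m for m, concept in MODULES.items() if concept == weakest]
    return random.choice(candidates)

def update_mastery(mastery, concept, correct):
    """Crude exponential update of a mastery estimate from one response."""
    target = 1.0 if correct else 0.0
    mastery[concept] += 0.3 * (target - mastery[concept])
    return mastery
```

Two students with different mastery profiles are routed to different modules, which is the “divergent by design” trajectory described above.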
Adaptive learning models may encourage consideration of alternatives to credentialing that are currently based on seat time and the credit hour. They may be attractive to institutions eager for solutions to problems of access to college, cost containment, and degree completion rates. Emerging partnerships and collaborations between corporations and startups that likely would have been competitors just a few years ago may fuel additional disruptive (or distracting) models.
Knewton’s partnership with Pearson allows it to leverage pre-existing contractual relationships with higher education institutions to deliver course content in a new way. The digital content of every Pearson textbook must now be “tagged” with metadata that powers the Knewton analytics system. Pearson can now use the Knewton model to re-power existing online reading and mathematics courses. Because Pearson holds significant shares of the higher education textbook and digital book markets, the deal gives Knewton a boost as it markets to institutions. Perhaps most significantly, Knewton now has access to pre-existing student data captured in the Pearson machine. The infusion of this comparative data will increase the accuracy of the Knewton system.
All of this is relevant to the consideration of MOOCs and online learning vendors. Daphne Koller, co-founder of Coursera, speaks of the analytical strengths of MOOCs, extolling their adaptive pedagogy. “We can now do the kind of rapid evolution in education that is common at companies like Google, which ‘A/B test’ their ad positions and user interface elements for effectiveness,” she has said. “These websites evolve in a matter of days or weeks rather than years” (Booker, Ellis. “Can Big Data Analytics Boost Graduation Rates?” Information Week, February 5, 2013).
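The A/B testing Koller invokes can be sketched in a few lines: give each student a stable assignment to one variant of a course element, then compare outcomes across variants. The assignment rule and the simulated completion rates below are invented for illustration:

```python
import random
from statistics import mean

# Minimal A/B test of a course element: stable variant assignment plus a
# comparison of completion rates. All data here is simulated.
def ab_assign(student_id):
    """Stable assignment: a given student always sees the same variant."""
    return "A" if student_id % 2 == 0 else "B"

random.seed(7)
# Simulate: variant B's redesigned exercise completes 10 points more often.
true_rate = {"A": 0.60, "B": 0.70}
outcomes = {"A": [], "B": []}
for student_id in range(10_000):
    variant = ab_assign(student_id)
    outcomes[variant].append(random.random() < true_rate[variant])

for variant in ("A", "B"):
    print(variant, round(mean(outcomes[variant]), 3))
```

With thousands of enrollments per variant, even a modest difference in completion rates becomes detectable within days, which is the rapid evolution Koller describes.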
Many will object to the use of data mining, learning analytics, and other methodologies that inform website advertising in the development of academic learning environments. Even some MOOC proprietors are dubious. Mike Feerick, CEO of Advance Learning Interactive Systems Online (ALISON), which provides interactive multimedia courseware for certification and standards-based learning, acknowledges the importance of data analytics while asserting that expecting such tools to solve education’s problems is simply wrong. Feerick is as adamant that talented and dedicated teachers who make effective use of these emerging tools are the key to pedagogical success as Koller and Ferreira are about the promise of data mining and analytics.
Media coverage of this issue highlights the technologies that these adaptive learning entrepreneurs promote, leading many to assume that they seek to supplant educators with software. Along with the persistent drumbeat coming out of these startups, the coverage inspires observations from scholars like Evgeny Morozov, who describes misplaced faith in technology as a “dangerous ideology.” In his article “The Perils of Perfection” (New York Times, March 2, 2013), Morozov labels this ideology “solutionism: an intellectual pathology that recognizes problems as problems based on just one criterion: whether they are ‘solvable’ with a nice and clean technological solution at our disposal.” He explains his ideas more fully in his book, To Save Everything, Click Here: The Folly of Technological Solutionism. Morozov raises important questions about the expectations our culture holds out for technological solutions to cultural and social problems. He specifically questions the impact of “nice and clean” big data solutions on our ability to negotiate the messy business of living and learning. Morozov argues that to cleanse our social and cultural institutions of that messiness—“from education to publishing and from music to transportation”—in the name of mere efficiency is to lose the benefit of the struggle and decision-making that contribute to maturity. Morozov cites Sartre, who “celebrated the anguish of decision as a hallmark of responsibility,” and notes that celebrating the value of such inefficiency and struggle “has no place in Silicon Valley.”
Indeed, there is undeniable value in inefficiency and imperfection. We learn from our mistakes and we do well to foster spaces where we can mess up and gain insights from the process. Proponents of adaptive learning are confident that the digital spaces they are creating are just that—spaces where we learn from our mistakes. At issue for higher education is the extent to which we identify and implement big data and adaptive learning solutions. Morozov’s concerns notwithstanding, these resources will be part of whatever MOOCs and learning management systems colleges and universities put in place. We must ensure that we understand how to make the best use of these tools as supplements to the established value and success of educators. We need to listen to the cautionary tales of those like Morozov to understand the reasonable limits of such potentially invasive technologies. Campus leaders and stakeholders across all sectors of higher education and from all sorts of institutions need to fully understand the implications of these proposed solutions, insert appropriate checks and balances, and ensure continued appreciation for the necessarily awkward messiness of learning.
Ethical Implications of Big Data, Analytics, and Adaptive Learning
In addition to the concerns of Morozov and others about surrendering the benefits of traditional, albeit imperfect, spaces to the sanitized environments of adaptive learning, there are ethical issues regarding the use of data collected via MOOCs and other online learning environments. The expansive dimensions of big data will uncover new obligations on the part of the institution to act in the interest of the student once the institution “knows” and can predict something about that student’s performance. Accumulating and aggregating analyses of big data, by design, results in predictions about performance and may raise privacy concerns. Reviewing such analyses will require decisions regarding appropriate allocation of such institutional resources as faculty, curricular design, and staff support. An institution will need to determine whether there are reporting and/or disclosure issues. Policies may need to be drafted about informing students and faculty about the type, nature, and scope of data being collected about them. Existing policies will need to be reviewed to understand whether they address big data issues.
The May 6, 2013, edition of EDUCAUSE Review included an article by James Willis, John Campbell, and Matthew Pistilli, entitled “Ethics, Big Data, and Analytics: A Model for Application,” which offers an in-depth analysis of the implications of big data in higher education. With respect to emerging big data opportunities and questions, the authors enjoin campus leaders to “understand the dynamic nature of academic success and retention, provide an environment for open dialogue, and develop practices and policies to address these issues.” They outline the ethical issues involved in implementing big data solutions on campus and offer prescriptive guidelines for policy development.
As part of their research, the authors reviewed the outcomes of Purdue University’s Signals project, which uses big data and analytics to detect “early warning signs and provides intervention to students who may not be performing to the best of their abilities before they reach a critical point” (source: http://www.itap.purdue.edu/studio//signals/). The authors conclude that the Signals project has improved retention and graduation rates, illuminating an interesting set of decisions about resource allocation. They credit feedback from the big data component of the Signals project for increasing student success: “Students who are less prepared for college—as measured solely by standardized test score—are retained by and graduated from Purdue at higher rates than their better-prepared peers after having one or more courses in which Signals was used.” Knowing that the mediating impact of data analytics improves academic performance of students who are less well prepared for college raises questions about allocation of resources and the accountability of the institution. “With access to these predictive formulas, faculty members, students, and institutions must confront their responsibilities related to academic success and retention, elevating these key issues from a ‘general awareness’ to a quantified value.” Inserting big data and adaptive learning systems into MOOCs will likely enhance their potential for positive mediation of student learning outcomes. It will also amplify the impact and scope of these issues.
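The kind of early-warning logic Signals embodies can be sketched simply: combine a few engagement signals into a risk rating that triggers intervention. The feature names, weights, and thresholds below are invented for illustration; Purdue’s actual model draws on far richer data:

```python
# Toy early-warning score in the spirit of Purdue's Signals. Features,
# weights, and thresholds are invented for illustration only.
def risk_level(logins_per_week, pct_assignments_submitted, avg_quiz_score):
    """Map a few engagement signals to a traffic-light risk rating."""
    # Scale each signal to [0, 1], where 1 means highest risk.
    risk = (0.3 * max(0.0, 1 - logins_per_week / 5)
            + 0.4 * (1 - pct_assignments_submitted)
            + 0.3 * (1 - avg_quiz_score))
    if risk > 0.6:
        return "red"      # intervene now: advisor outreach, instructor contact
    if risk > 0.3:
        return "yellow"   # nudge: automated reminder, study resources
    return "green"        # on track

print(risk_level(0, 0.4, 0.5))   # a disengaged student; prints "red"
```

The policy questions in the text follow directly from a function like this: who sees the rating, what it obligates the institution to do, and how students are told their behavior feeds it.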
Francis Diebold is the Paul F. and Warren S. Miller Professor of Economics in the School of Arts and Sciences at the University of Pennsylvania. He is also Professor of Finance and Statistics in the Wharton School of the University of Pennsylvania. Diebold asserts that big data is “not merely taking us to bigger traditional places. Rather, it’s taking us to wildly new places, unimaginable only a short time ago.” If so, many current institutional policies are likely inadequate. To help with policy review and revision, Willis, Campbell, and Pistilli offer a set of questions that help inform implementation of big data in campus learning systems. They are worth sharing here:
- Does the college inform students that their academic behaviors are tracked?
- What and how much information should be provided to the student?
- What and how much information does the institution give faculty members?
- Does the institution provide a calculated probability of academic success or just a classification of success (e.g., above average, average, below average)?
- What guidelines are provided to faculty regarding the use of the student data?
- Should the faculty member contact students directly?
- Will the data influence perceptions of the student and the grading of assignments?
- What amount of resources should the institution invest in students who are unlikely to succeed in a course?
- What obligation does the student have to seek assistance?
Building on the research and related issues, the authors propose three specific responsibilities institutions must embrace in order to ensure academic success for faculty and students in an era of massive data aggregation in online adaptive learning environments:
- The institution is responsible for developing, refining, and using the massive amount of data it collects to improve student success and retention, as well as for providing the tools and information necessary to promote student academic success and retention.
- The institution is responsible for providing students and faculty members with the training and support necessary to use the tools in the most effective manner. It further is responsible for providing students with excellent instructional opportunities, student advising, and a supportive learning environment, as well as for providing faculty members with tools that allow them to deliver timely feedback to students on their progress within their courses.
- The institution is responsible for providing a campus climate that is both attractive and engaging and that enhances the likelihood that students will connect with faculty and other students, and for recognizing and rewarding faculty and staff who are committed to student academic success and retention.
Institutions will need to determine their capacity to manage the implications of big data. Commercial entry and expansion into the learning analytics and adaptive learning market is increasing dramatically and is already informing the development of MOOCs. That increase adds to the array of issues senior leadership in higher education must consider as part of the strategic integration of pedagogy and technology in the context of the campus mission.
The authors of “Ethics, Big Data, and Analytics: A Model for Application,” suggest specific questions to address when considering adoption and implementation of big data on campus:
- What is the role of big data in education?
- How can big data enrich the student experience?
- Will the use of big data increase retention?
- To what extent can big data contribute to successful outcomes?
Further, as you consider the implications of learning analytics at your institution, questions to ask include:
- Are you already using data analytics?
- Do you have an organizational culture supporting the use of data analytics for decision-making?
- Does your institution have the organizational adaptability to implement analytics in the culture?
- Does your institution currently have the organizational capacity and skill sets to make good use of data analytics?
- Are you aware of and have you made use of the ECAR Analytics Maturity Index available from Educause?
- Have you reviewed your institution’s strategic plan to identify issues that would benefit from analytics?
- Do you view analytics as strategic investment or additional cost?
Benjamin, Ludy T. “A History of Teaching Machines.” American Psychologist 43, no. 9 (September 1988): 703–712.
“Big Data Defined,” n.d. http://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=299.
Booker, Ellis. “Can Big Data Analytics Boost Graduation Rates?” Information Week, February 5, 2013. http://www.informationweek.com/big-data/news/big-data-analytics/can-big-data-analytics-boost-graduation-rates/240147807.
“Building a Smarter Campus: How Analytics Is Changing the Academic Landscape.” Campus Technology, January 23, 2012.
Diebold, Francis X. “A Personal Perspective on the Origin(s) and Development of ‘Big Data’: The Phenomenon, the Term, and the Discipline.” University of Pennsylvania, November 26, 2012.
Education Growth Advisors. “Learning To Adapt: Understanding The Adaptive Learning Supplier Landscape,” April 2013.
Fain, Paul. “Gates Foundation Helps Colleges Keep Tabs on Adaptive Learning Technology.” Inside Higher Ed, April 4, 2013. http://www.insidehighered.com/news/2013/04/04/gates-foundation-helps-colleges-keep-tabs-adaptive-learning-technology.
Fischman, Josh. “Popular Pearson Courseware Revamps by Offering ‘Adaptive Learning’.” The Chronicle of Higher Education. The Wired Campus, November 1, 2011. http://chronicle.com/blogs/wiredcampus/popular-pearson-tutoring-programs-revamp-by-offering-adaptive-learning/33970.
———. “The Rise of Teaching Machines.” The Chronicle of Higher Education, May 8, 2011, sec. The Digital Campus 2011. http://chronicle.com/article/The-Rise-of-Teaching-Machines/127389/.
Holmgren, Richard. “The Real Precipice: Essay on How Technology and New Ways of Teaching Could Upend Colleges’ Traditional Models.” Inside Higher Ed, April 15, 2013. http://www.insidehighered.com/views/2013/04/15/essay-how-technology-and-new-ways-teaching-could-upend-colleges-traditional-models.
Jarrett, Josh, and Rahim Rajan. “Jumpstarting Adaptive Learning.” Impatient Optimists, March 13, 2013. http://www.impatientoptimists.org/Posts/2013/03/Jumpstarting-Adaptive-Learning.
Keller, Josh. “Apollo to Buy Adaptive-Learning Company for $75-Million.” The Chronicle of Higher Education. The Wired Campus, August 2, 2011. http://chronicle.com/blogs/wiredcampus/apollo-to-buy-adaptive-learning-company-for-75-million/32658.
Kolowich, Steve. “Arizona St. and Knewton’s Grand Experiment with Adaptive Learning.” Inside Higher Ed, January 25, 2013. http://www.insidehighered.com/news/2013/01/25/arizona-st-and-knewtons-grand-experiment-adaptive-learning.
———. “California State U. System Will Expand MOOC Experiment.” The Chronicle of Higher Education. The Wired Campus, April 10, 2013. http://chronicle.com/blogs/wiredcampus/california-state-u-system-will-expand-mooc-experiment/43361?cid=wc&utm_source=wc&utm_medium=en.
Laney, Doug. “3D Data Management: Controlling Data Volume, Velocity, and Variety.” Meta Group, February 6, 2001.
Lohr, Steve. “How Big Data Became So Big – Unboxed.” The New York Times, August 11, 2012, sec. Business Day. http://www.nytimes.com/2012/08/12/business/how-big-data-became-so-big-unboxed.html.
Lopes Harris, Pat. “SJSU/EdX Expansion.” SJSU News, April 10, 2013. http://blogs.sjsu.edu/today/2013/sjsuedx-expansion/.
McLemee, Scott. “Review of Matthew L. Jockers, ‘Macroanalysis: Digital Methods & Literary History’.” Inside Higher Ed, May 1, 2013. http://www.insidehighered.com/views/2013/05/01/review-matthew-l-jockers-macroanalysis-digital-methods-literary-history.
McRae, Philip. “Rebirth of the Teaching Machine through the Seduction of Data Analytics: This Time It’s Personal.” Philip McRae, Ph.D., April 4, 2013. http://philmcrae.com/2/post/2013/04/rebirth-of-the-teaching-maching-through-the-seduction-of-data-analytics-this-time-its-personal1.html.
New, Jake. “Online-Learning Portal Allows Educators to Create Adaptive Content.” The Chronicle of Higher Education. The Wired Campus, April 17, 2013. http://chronicle.com/blogs/wiredcampus/online-learning-portal-allows-educators-to-create-adaptive-content/43405?cid=wc&utm_source=wc&utm_medium=en.
———. “Pearson Acquires Learning Catalytics, a Cloud-Based Assessment System.” The Chronicle of Higher Education. The Wired Campus, April 22, 2013. http://chronicle.com/blogs/wiredcampus/pearson-acquires-learning-catalytics-a-cloud-based-assessment-system/43543?cid=wc&utm_source=wc&utm_medium=en.
Parry, Marc. “A Conversation With 2 Developers of Personalized-Learning Software.” The Chronicle of Higher Education, July 18, 2012, sec. Technology. http://chronicle.com/article/A-Conversation-With-2/132953/.
———. “Colleges Mine Data to Tailor Students’ Experience.” The Chronicle of Higher Education, December 11, 2011. http://chronicle.com/article/A-Moneyball-Approach-to/130062/.
Ravishanker, Ganesan (Ravi). “Doing Academic Analytics Right: Intelligent Answers to Simple Questions.” ECAR Research Bulletin 2 (2011).
Stokes, Peter. “Adaptive Learning Could Reshape Higher Ed Instruction (essay).” Inside Higher Ed, April 4, 2013. http://www.insidehighered.com/views/2013/04/04/adaptive-learning-could-reshape-higher-ed-instruction-essay#ixzz2PVDvDalb.
“The First Annual Rice University Workshop On Personalized Learning.” Center for Digital Learning and Scholarship (RDLS), April 22, 2013. http://rdls.rice.edu/personalized-learning-workshop.
Tilsley, Alexandra. “Wellesley and Wesleyan Hope MOOCs Will Inform Campus-based Teaching.” Inside Higher Ed, December 6, 2012. http://www.insidehighered.com/news/2012/12/06/wellesley-and-wesleyan-hope-moocs-will-inform-campus-based-teaching.
Walsh, Taylor, and Ithaka S+R. “Unlocking The Gates: How And Why Leading Universities Are Opening Up Access To Their Courses.” Princeton University Press, 2011.
Warner, John. “We Don’t Need No Adaptive Learning | Inside Higher Ed.” Inside Higher Ed, April 4, 2013. http://www.insidehighered.com/blogs/just-visiting/we-dont-need-no-adaptive-learning.
Watters, Audrey. “Top Ed-Tech Trends of 2012: Data and Learning Analytics.” Inside Higher Ed, December 20, 2012. http://www.insidehighered.com/blogs/hack-higher-education/top-ed-tech-trends-2012-data-and-learning-analytics.
Willis, James E., John P. Campbell, and Matthew D. Pistilli. “Ethics, Big Data, and Analytics: A Model for Application.” EDUCAUSE Review, May 6, 2013. http://www.educause.edu/ero/article/ethics-big-data-and-analytics-model-application?utm_source=Informz&utm_medium=Email+marketing&utm_campaign=ERO&utm_content=flip.
 Francis X. Diebold, “A Personal Perspective on the Origin(s) and Development of ‘Big Data’: The Phenomenon, the Term, and the Discipline” (University of Pennsylvania, November 26, 2012).
 Ganesan (Ravi) Ravishanker, “Doing Academic Analytics Right: Intelligent Answers to Simple Questions,” ECAR Research Bulletin 2 (2011).
 IBM, “Building a Smarter Campus: How Analytics Is Changing the Academic Landscape” (1105 Media Education Group, January 23, 2012), http://public.dhe.ibm.com/common/ssi/ecm/en/ytl03072usen/YTL03072USEN.PDF.
 Jacqueline Bichsel, “Analytics in Higher Education: Benefits, Barriers, Progress, and Recommendations” (EDUCAUSE Center for Applied Research, 2012).
 Josh Keller, “Apollo to Buy Adaptive-Learning Company for $75-Million,” The Chronicle of Higher Education, The Wired Campus, August 2, 2011, http://chronicle.com/blogs/wiredcampus/apollo-to-buy-adaptive-learning-company-for-75-million/32658.
 Marc Parry, “Colleges Mine Data to Tailor Students’ Experience,” The Chronicle of Higher Education, December 11, 2011, http://chronicle.com/article/A-Moneyball-Approach-to/130062/.
 “Desire2Learn Raises $80 Million in Financing Round from NEA and OMERS Ventures,” accessed August 5, 2013, http://www.desire2learn.com/news/2012%2FDesire2Learn-Raises-80-Million-in-Financing-Round-Led-by-NEA-and-OMERS-Ventures%2F.
 Ellis Booker, “Can Big Data Analytics Boost Graduation Rates?,” Information Week, February 5, 2013, http://www.informationweek.com/big-data/news/big-data-analytics/can-big-data-analytics-boost-graduation-rates/240147807.
 Ludy T. Benjamin, “A History of Teaching Machines,” American Psychologist 43, no. 9 (September 1988): 703–712.
 Peter Stokes, “Adaptive Learning Could Reshape Higher Ed Instruction (essay),” Inside Higher Ed, April 4, 2013, http://www.insidehighered.com/views/2013/04/04/adaptive-learning-could-reshape-higher-ed-instruction-essay#ixzz2PVDvDalb.
 Adam Newman, Peter Stokes, and Gates Bryant, “Learning to Adapt: A Case for Accelerating Adaptive Learning in Higher Education” (Education Growth Advisors, n.d.).
 Richard Holmgren, “The Real Precipice: Essay on How Technology and New Ways of Teaching Could Upend Colleges’ Traditional Models,” Inside Higher Ed, April 15, 2013, http://www.insidehighered.com/views/2013/04/15/essay-how-technology-and-new-ways-teaching-could-upend-colleges-traditional-models.
 Steve Kolowich, “Arizona St. and Knewton’s Grand Experiment with Adaptive Learning,” Inside Higher Ed, January 25, 2013, http://www.insidehighered.com/news/2013/01/25/arizona-st-and-knewtons-grand-experiment-adaptive-learning.
 Marc Parry, “A Conversation With 2 Developers of Personalized-Learning Software,” The Chronicle of Higher Education, July 18, 2012, sec. Technology, http://chronicle.com/article/A-Conversation-With-2/132953/.
 Booker, “Can Big Data Analytics Boost Graduation Rates?”.
 James E. Willis, John P. Campbell, and Matthew D. Pistilli, “Ethics, Big Data, and Analytics: A Model for Application,” EDUCAUSE Review, May 6, 2013, http://www.educause.edu/ero/article/ethics-big-data-and-analytics-model-application?utm_source=Informz&utm_medium=Email+marketing&utm_campaign=ERO&utm_content=flip.