One of CFM’s goals is to help museums learn from other sectors, and to help other sectors learn from us. For that reason, my colleagues and I cultivate a diverse audience for our work, drawing futurists, game designers, technologists, educators, librarians and many others into conversation with museumers. In today’s post Rhea Steele, COO at the organization that accredits P-12 educator preparation programs, offers some thoughts on how artificial intelligence, our tech focus in TrendsWatch 2017, may transform the work of accreditation programs.
When I read the November/December issue of Museum magazine: Museums 2040, I was energized and excited by the not-so-far-fetched opportunities presented to future museums. Reading the Accreditation Spotlight, I recalled a session focused on Artificial Intelligence (AI) at the American Society of Association Executives Technology Conference. Not just the stuff of science fiction, AI takes computing beyond algorithms and human-created code. AI allows computers to learn from experience, adjust “behaviors” (outputs) to match new stimuli (inputs), and perform tasks of increasing complexity. This type of learning is referred to as machine learning. During the session, we discussed how AI can assist with repetitive tasks, quickly parse large data sets, and “multitask” by monitoring and responding to multiple systems and inputs simultaneously. I began to wonder what would happen if we applied machine learning and AI to accreditation activities.
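To make the idea of “adjusting outputs to match new inputs” concrete, here is a deliberately tiny sketch (my own illustration, not any system described in the article): a one-weight model that repeatedly nudges its output toward the examples it is shown, a single-feature version of the update rule behind much of machine learning.

```python
# Minimal sketch of "learning from experience": a one-weight model that
# adjusts its behavior (output) to reduce mismatch with new stimuli
# (input/target pairs). Purely illustrative, not a production system.
def learn(pairs, lr=0.1, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, target in pairs:
            prediction = w * x            # current "behavior" (output)
            error = target - prediction   # mismatch with the stimulus
            w += lr * error * x           # adjust behavior to shrink error
    return w

# Experience: in these examples, outputs are double the inputs.
w = learn([(1, 2), (2, 4), (3, 6)])
print(round(w, 2))  # converges near 2.0
```

Nothing in the loop is told the rule “multiply by two”; the model infers it from examples, which is the essential difference from human-coded algorithms that the session highlighted.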
The purpose of accreditation in higher education is to improve program quality and assure public accountability, whereas in the museum space, it is to recognize adherence to the standards of the field. In both cases, accreditation systems depend on clearly defined rubrics to help volunteer peer reviewers judge whether an institution demonstrates that it meets the standards. At my organization, the Council for the Accreditation of Educator Preparation, as well as at other accreditors, our process is built on the evaluative judgments of volunteers applying our guidelines and decision criteria. A significant amount of time, effort, and funding goes into ensuring volunteers are sufficiently trained and able to interpret evidence both broadly and within institutional context. As you can imagine, the quality assurance processes, including establishing interrater reliability, result in the need for many, many volunteers and a well-informed, detail-oriented staff.
There is general consensus that the process of accreditation is best conducted by volunteer peer reviewers due to the depth of their knowledge of the field and the nuanced understanding needed to connect disparate pieces of evidence in the context of an individual institution. Can human peer review alone meet the demand for accreditation services? Currently, about 1,075 museums have been accredited by AAM, compared with the 4,000 that have taken the Pledge of Excellence and the estimated 35,000 museums in the US identified by the Institute of Museum and Library Services. Slightly more than 1,000 people serve as peer reviewers for AAM Accreditation, and a nine-member commission makes all the final decisions regarding accreditation. Could AI enable the AAM program to increase the number of museums served, without a proportionate increase in cost?
AI could help with the quality, as well as the quantity, of reviews. The challenge faced by volunteer peer reviewers is to approach each organization with true objectivity: a peer reviewer who works at a large, well-funded institution must surface and set aside their implicit biases when reviewing an institution with a different profile. While this system has worked well in the past, can we make it work better with the application of AI?
Alexa, Google Assistant, and Siri are all able to interpret natural language commands, IBM’s Watson won Jeopardy, and Netflix makes eerily on-target recommendations for movies and shows I will like. It’s all just algorithms, you might say, and you are correct. But these algorithms are increasingly complex and sophisticated. Today’s AI-based programs use machine learning techniques (including rubrics and rules) to acquire information and adapt through experience. Our peer reviewers do the same. What if we shift our model and use AI to do what it does best: parse through reams of qualitative and quantitative data submitted for accreditation and provide peer reviewers with an initial assessment of alignment to the standards? How would this all work?
In the same way we currently train peer reviewers, an accreditation AI would be trained using data elements, evidence, rubrics and decisions made by past peer reviewers. We would then run simulations so the AI learns alongside the peer reviewers currently auditing institutions, compare the AI’s recommendations with those of the peer review team, and make adjustments. Eventually, the AI would be able to provide absolute consistency in the identification of areas for improvement and misalignment with standards. It could also identify practices that correlate with positive outcomes, thus improving the flow of information on best practices throughout the field. As in many fields beginning to adopt AI (medicine, engineering, customer service), I believe the technology would supplement, rather than supplant, the human role. Peer reviewers would use the AI’s recommendations to collect and refine information requested and reviewed during the on-site visit, and use the algorithm’s analysis to inform their recommendations and the decision to grant or not grant accreditation.
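The train-then-calibrate loop described above can be sketched in miniature. The code below is a toy of my own construction, not CAEP’s or AAM’s actual system: it “trains” on invented past review decisions by counting which evidence features co-occurred with which outcomes, recommends a decision for a new institution, and flags any disagreement with the human team for adjustment.

```python
# Hypothetical sketch of the calibration step: learn from past peer-review
# decisions, then compare the model's recommendation with the human team's.
# All feature names, decisions, and data are invented for illustration.
from collections import defaultdict

def train(past_reviews):
    """past_reviews: list of (evidence_features, decision) pairs.
    Counts, per feature, how often it co-occurred with each decision."""
    counts = defaultdict(lambda: defaultdict(int))
    for features, decision in past_reviews:
        for f in features:
            counts[f][decision] += 1
    return counts

def recommend(counts, features):
    """Score each decision by summing feature co-occurrence counts."""
    scores = defaultdict(int)
    for f in features:
        for decision, c in counts[f].items():
            scores[decision] += c
    return max(scores, key=scores.get) if scores else "refer to humans"

# Past decisions used as training data (invented examples).
history = [
    ({"mission_documented", "stable_funding"}, "accredit"),
    ({"mission_documented", "collections_plan"}, "accredit"),
    ({"no_collections_plan", "deficit_budget"}, "defer"),
]
model = train(history)

# Compare the model's recommendation with the peer-review team's decision;
# a mismatch would feed back into the adjustment cycle.
ai = recommend(model, {"mission_documented", "no_collections_plan"})
team = "accredit"
print(ai, "(matches team)" if ai == team else "(flag for adjustment)")
```

Even this toy shows the division of labor the paragraph argues for: the model does the exhaustive counting across evidence, while humans supply the training decisions and arbitrate disagreements.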
As we move further and further into an AI-enabled future, we have an opportunity to review and reframe how we leverage technology to improve our industries. Application of AI to assist in the peer review process of accreditation is one way we can leverage the power of technology and help humans focus on high-value activities. How else can museums benefit from the novel application of technology?