
VIF's Process Lab: Synthesizing and applying feedback


The midpoint grant workshop is a unique and effective component of the DML competition. It is a rare opportunity to get feedback from a diverse group of experts in the education, design, business and policy fields, and it is invaluable to hear from and provide feedback to our peer grantees. It was important to see how our project fits into a larger discussion about trusted learning communities; January's workshop was very helpful both in learning how best to present this work and in receiving feedback on the work itself.


Our major takeaways to address include:

Student feedback: Given that students currently don't have access, can teachers gather that feedback and report it?
We are considering how to redefine what student evidence looks like. Rather than a teacher providing a record of something a student made, that teacher could provide a record of student feedback. Teacher self-reporting is the easiest approach to implement, but depending on the format it is also subject to tampering. We will make it a priority to share with teachers that student feedback is another lens through which to consider iteration, and that it contributes to a more student-led, investigative classroom environment.

Badge value: The value of the badge is in the transaction, not in the badge itself. What is the teacher looking for? What is the district looking for? How are these matched? If we are just pushing out badges, they are meaningless.
Our system is designed to make competency goals clear to the teacher, community and peer reviewer at every step of the build-review-iterate process.

This is part of a larger programmatic piece that we are working on in close collaboration with our partner districts. It is outside the scope of this project, but we will include it in our final report as recommendations for other learning organizations that will use the plug-in for their systems.

Collective badges: How do we move away from individualism and move toward building collaborative cultures? How can we use badges to build supportive cultures?
Our developer is laying the groundwork to allow for user choice in building groups and deeper collaboration. We are seeking a balance: avoiding a feature-heavy system while still including the important features and making sure we are operating from a technical foundation that allows for expansion.

The system is also being built to collect data around each interaction, even though that data won't be immediately displayed. This interaction data can show community growth and progress, even when it's not tied to any sort of badge.
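To make this concrete, here is a minimal sketch of what per-interaction capture could look like. It is purely illustrative; the names (`InteractionEvent`, `recordInteraction`, and so on) are hypothetical placeholders, not the plug-in's actual schema:

```typescript
// Hypothetical sketch of per-interaction event capture. Field and type
// names are illustrative, not the project's actual data model.
type InteractionKind = "comment" | "review" | "revision" | "badge_award";

interface InteractionEvent {
  kind: InteractionKind;
  actorId: string;    // the user performing the action
  artifactId: string; // the lesson plan or artifact acted upon
  timestamp: string;  // ISO 8601
}

// Events are stored but not yet surfaced in the UI; later they can be
// aggregated to show community growth and progress over time.
const eventLog: InteractionEvent[] = [];

function recordInteraction(event: InteractionEvent): void {
  eventLog.push(event);
}

// Example of an aggregation a future community dashboard might run:
function countByKind(kind: InteractionKind): number {
  return eventLog.filter((e) => e.kind === kind).length;
}
```

Capturing events in this flat, badge-agnostic form is what lets us defer display decisions: the same log can later feed community-growth views without a schema change.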

Peer reviewers and trust: Trusting the peer reviewers is very important. If the peer reviewer training is solid, teachers will trust the reviews and find value in the relationships.

All of our peer-review training will be completely open, transparent and available for participation from anyone in the community. Peer reviewers will also be visually identified in the system.

Our system will also allow authors to reach out to specific, trusted community members for formative review.
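As a rough sketch of how such a targeted request might be modeled (all names here are hypothetical, assuming trained reviewers carry a flag that drives their visual identification in the UI):

```typescript
// Hypothetical model of a targeted formative review request.
interface CommunityMember {
  id: string;
  displayName: string;
  trainedReviewer: boolean; // drives the reviewer badge shown in the UI
}

interface ReviewRequest {
  artifactId: string;
  authorId: string;
  reviewerId: string; // the specific, trusted member the author chose
  formative: boolean; // formative (in-process) rather than summative
  createdAt: string;
}

// An author directs a formative request at a chosen community member.
function requestReview(
  author: CommunityMember,
  reviewer: CommunityMember,
  artifactId: string
): ReviewRequest {
  return {
    artifactId,
    authorId: author.id,
    reviewerId: reviewer.id,
    formative: true,
    createdAt: new Date().toISOString(),
  };
}
```

Keeping the reviewer flag on the member record, rather than on the request, means trust signals stay visible everywhere a reviewer appears in the system.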

Value-add: Where is the value for diverse stakeholders in any education system? For example, how will districts use this kind of data? How will teachers use it if they are not yet adopting badges (tracking competencies)?
As with badge value above, this is a programmatic piece that we are working on in close collaboration with our partner districts. It is outside the scope of this project, but we will include it in our final report as recommendations for other learning organizations that will use the plug-in for their systems.

Incentivization: We must ensure that peer review is incentivized all along the way.
As training for peer reviewers is developed, we are focusing on review as a method of reaching a competency-driven goal. We believe that all stakeholders in the system have to be clearer about the goals and roles a user is working toward, and this clarity will drive interest in earning competencies with purpose.

Iteration: Iteration must be at the core. As with commenting on individual artifacts, we need to make different versions available.
We will ensure that both the system and the rubric address reflection (why to iterate) and student artifacts (how to iterate). At this time, we don't anticipate adding a robust versioning system, although it's certainly something to consider as a future extension.
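Should we revisit versioning later, one lightweight option would be immutable per-iteration snapshots rather than a full diff-based history. This is a hypothetical sketch, not a committed design:

```typescript
// Hypothetical lightweight versioning: one immutable snapshot per
// build-review-iterate cycle, instead of a full diff-based system.
interface ArtifactSnapshot {
  artifactId: string;
  version: number;    // 1, 2, 3, ... one per iteration
  content: string;
  reflection: string; // why the author iterated (the rubric's "why")
  capturedAt: string;
}

const history = new Map<string, ArtifactSnapshot[]>();

function takeSnapshot(
  artifactId: string,
  content: string,
  reflection: string
): ArtifactSnapshot {
  const prior = history.get(artifactId) ?? [];
  const snap: ArtifactSnapshot = {
    artifactId,
    version: prior.length + 1,
    content,
    reflection,
    capturedAt: new Date().toISOString(),
  };
  history.set(artifactId, [...prior, snap]);
  return snap;
}
```

Pairing each snapshot with its reflection keeps the "why iterate" and "how iterate" questions answered side by side.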

Reviewer participation: How are we going to make sure that the reviewers are there?
The training module development is critical, and we will seed our pool of reviewers with invested users. We plan to have district administrators and principals identify influencers, and also lean on our internal group of advocates who opt in to providing platform feedback each year.

Forking lesson plan content: One additional item that came up during the feedback session led us to discuss the idea of forking lesson plans. The issue of IP/copyright/ownership came up, of course, but everyone seemed very much in favor of that kind of functionality, as long as it could be implemented in a fair way that people opt into.

We plan to implement a Creative Commons license on VIF-created content and provide avenues to begin testing forked, granular content in the next phase of our project (beyond the scope of the grant work).
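To make the opt-in idea concrete, here is a hypothetical sketch of fork metadata that preserves attribution and respects the author's choice; the field names and license values are illustrative only:

```typescript
// Hypothetical fork model: forking is blocked unless the source plan's
// author has opted in, and provenance is preserved for attribution.
interface LessonPlan {
  id: string;
  authorId: string;
  title: string;
  license: "CC-BY" | "all-rights-reserved"; // VIF content would carry a CC license
  allowForks: boolean; // the explicit author opt-in discussed above
  forkedFrom?: string; // id of the source plan, for attribution
}

function forkLessonPlan(
  source: LessonPlan,
  newAuthorId: string,
  newId: string
): LessonPlan {
  if (!source.allowForks) {
    throw new Error("The author has not opted in to forking");
  }
  return {
    ...source,
    id: newId,
    authorId: newAuthorId,
    forkedFrom: source.id, // keeps the attribution chain intact
  };
}
```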


Additionally, our team had the opportunity to sit down with teachers, workshop paper prototypes of our system and, most importantly, listen to their needs and values for a constructed collaborative work space. The system we are proposing was initially perceived as another add-on to what can be an already process-intensive approach to lesson planning in the learning center. We need our system to meet teachers where they are in their planning practice, not shoehorn them into a prescriptive template and process.

These teachers were clearly comfortable collaborating and sharing feedback with peers, but in much less formal ways, such as grade-level team planning or spontaneous conversation. We also learned more about the expectations different schools and districts have for lesson plans, both in formatting and in document type. We were interested in these questions because we want to grow our system to meet teachers where they are.

The key takeaways for us were:

  • Teachers are willing to share in-process work and participate in peer feedback if they can alert specific (trusted) colleagues for feedback
  • Teachers want credit for participating as reviewers; acknowledge peer review as a competency to grow over time
  • They would love the ability to use mobile to snap evidence and jot quick notes from a phone to roll into lesson planning later
  • Most loved the value-add of forked, granular content, which would draw participation into the community around artifacts. To execute this well, content must be well tagged, and the clearer the credibility of the source, the better
  • The opportunity exists for the process lab to be the space for lesson plan building, but teachers need the flexibility to export to Google Docs because of constraints in school or district protocols

Most of these ideas were considered while building the software, but this discussion made clearer how we can best position the affordances of the system to increase participation. Some features, such as forked content or export to Google Docs, we will want to roadmap for later in the system's development.

A final significant takeaway from this discussion was establishing trust with a cohort of teachers who are willing to continue looking at prototypes with us through the spring. They were genuinely interested in the work that we’re doing, and saw value that could be added to their own practice.
