I'm just back from the Association of Moving Image Archivists annual conference where I spoke as part of a panel on the Media Ecology Project, an effort initiated by Mark Williams at Dartmouth to increase scholarly access to archival media. MEP is something I've mentioned in these posts before, but the video below gives a much better idea of what we're trying to accomplish with the project. We'd love to get some feedback from HASTACers!
The video goes into more depth but, as I mentioned, the core goal of MEP is finding ways to increase scholarly access to and use of media held in archives around the world. It does that by making the media held in those archives more discoverable through enhanced metadata and citation capabilities. We're also tying archival media into analysis and publication tools like Scalar and MediaThread more directly than has been possible with pre-semantic web software.
But the question remains: if we are increasing the amount of metadata scholars can use to find and index media, where is the new metadata coming from? As more and more digitization projects come online, the quantity of video available has outstripped the time archivists can devote to making it discoverable. A key concern for any attempt to generate third-party metadata about archival collections is the question of what qualifies as good information. Crowdsourcing is an option we'd like to make more common, but issues of provenance arise any time you try to crowdsource basic data.
We've explored several potential solutions to the provenance question, ranging from algorithmic trust systems to OBI badges awarded to those who produce great metadata. But what would HASTAC do? Federated login systems? Badges? Mechanical Turk? Expert-only input (and who counts as an expert)? I'm very interested to hear what kinds of ideas HASTAC readers and scholars have for fact-checking crowdsourced metadata.
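To make the "algorithmic trust system" idea a little more concrete, here is one minimal sketch of how it could work: contributors submit tags for an archival clip, each tag is accepted by reputation-weighted voting, and contributor reputations are nudged up or down depending on whether their tags made the cut. This is purely illustrative, not MEP's actual design; the function names, default weights, and thresholds are all assumptions for the sake of the example.

```python
from collections import defaultdict

def score_tags(submissions, reputation, accept_threshold=1.0):
    """Aggregate crowdsourced tags by reputation-weighted voting.

    submissions: list of (contributor, tag) pairs for one media item
    reputation: dict mapping contributor -> trust weight
    Returns the set of tags whose combined weight meets the threshold.
    (Threshold and default weight are illustrative choices.)
    """
    weight = defaultdict(float)
    for contributor, tag in submissions:
        # Unknown contributors start with a modest default weight.
        weight[tag] += reputation.get(contributor, 0.5)
    return {tag for tag, w in weight.items() if w >= accept_threshold}

def update_reputation(submissions, accepted, reputation, step=0.1):
    """Nudge contributor trust up when their tags were accepted, down otherwise."""
    for contributor, tag in submissions:
        delta = step if tag in accepted else -step
        current = reputation.get(contributor, 0.5)
        # Clamp reputations so no one becomes all-powerful or unrecoverable.
        reputation[contributor] = min(2.0, max(0.0, current + delta))
    return reputation

# Example: two trusted contributors agree, one low-reputation contributor dissents.
submissions = [("alice", "newsreel"), ("bob", "newsreel"), ("carol", "cartoon")]
reputation = {"alice": 0.8, "bob": 0.6, "carol": 0.2}
accepted = score_tags(submissions, reputation)
print(accepted)  # {'newsreel'}
reputation = update_reputation(submissions, accepted, reputation)
```

The appeal of a scheme like this is that it needs no login gatekeeping up front: anyone can contribute, and the system gradually learns whom to weight. The obvious weakness is cold-start bias and the possibility of coordinated gaming, which is exactly where badges or expert review could complement it.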