Summary
Lasecki et al.’s paper “Real-time Captioning by Groups of Non-experts” explores a new approach that relies on groups of non-expert captionists to provide speech captions of good quality, and presents an end-to-end system called LEGION: SCRIBE that enables collective, on-demand captioning of live lectures in real time. In the speech captioning task, professional stenographers can achieve high accuracy, but their services are very expensive and must be arranged in advance. For cost-effective captioning, the researchers introduce the idea of having a group of non-experts caption the audio and merging their inputs to achieve more accurate captions. SCRIBE has two components: an interface for real-time captioning designed to collect partial captions from each crowd worker, and a real-time input combiner that merges the collective captions into a single output stream. Their experiments show that the proposed solution is feasible and that non-experts can provide captions with good quality, good content coverage, and short per-word latency. The proposed model can potentially be extended to allow dynamic groups to exceed the capacity of individuals in various human performance tasks.
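To make the combining step concrete, below is a minimal Python sketch of one way to merge noisy partial captions: bucket each worker's timestamped words into coarse time slots and take a majority vote per slot. This is only an illustration of the idea, not the authors' actual combiner (which is more sophisticated); the function name, slot size, and data layout are all assumptions.

```python
from collections import Counter, defaultdict

def combine_partial_captions(worker_inputs, slot_ms=500):
    """Merge noisy partial captions from several workers into one stream.

    worker_inputs: one list of (timestamp_ms, word) pairs per worker.
    Words are bucketed into coarse time slots; the most common word in
    each slot wins (simple majority vote).
    """
    slots = defaultdict(Counter)
    for words in worker_inputs:
        for ts, word in words:
            slots[ts // slot_ms][word.lower()] += 1
    # Emit the majority word for each time slot, in chronological order.
    return [votes.most_common(1)[0][0] for _, votes in sorted(slots.items())]

# Example: three workers each caught different fragments of the audio.
workers = [
    [(0, "the"), (600, "quick"), (1700, "fox")],
    [(50, "the"), (1150, "brown"), (1750, "fox")],
    [(620, "quick"), (1100, "brown"), (1800, "box")],
]
print(" ".join(combine_partial_captions(workers)))  # -> "the quick brown fox"
```

Note how no single worker captured the full sentence, and one worker's error ("box") is outvoted; the merged stream covers more than any individual input.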
Reflection
This paper presents an interesting study of how to achieve better performance on a single task through the collaborative efforts of a group of individuals. I think this idea aligns with ensemble modeling in machine learning. The idea presented in the paper is to generate multiple partial outputs (provided by team members and crowd workers) and then use an algorithm to automatically merge all of the noisy partial inputs into a single output. Similarly, ensemble modeling is a machine learning method in which multiple diverse models are developed to predict an outcome, either by using different algorithms or by training on different data sets; the ensemble then aggregates the output of each base model to generate the final output. The motivation for relying on a group of non-expert captionists to achieve performance beyond the capacity of any individual non-expert corresponds to the idea of using ensemble models to reduce generalization error and obtain more reliable results. As long as the base models are diverse and independent, performance improves when the ensemble approach is used. The paper's approach likewise pools the collaborative efforts of a crowd to obtain the final result. In both approaches, even though the system has multiple human or machine inputs as its sources, it acts and performs as a single model. I would be curious to see how ensemble models would perform on the same task compared with the crowdsourcing approach proposed in the paper.
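To make the analogy concrete, here is a small, hypothetical sketch using scikit-learn's VotingClassifier, in which three diverse base models vote on each prediction much as SCRIBE merges the inputs of several workers. The data set and model choices are arbitrary illustrations, not anything from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three diverse base models, analogous to three independent workers.
base = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("nb", GaussianNB()),
]
ensemble = VotingClassifier(estimators=base, voting="hard")
ensemble.fit(X_tr, y_tr)

# The majority vote typically matches or beats the weaker members, just
# as the merged caption covers more than any single worker's input.
for name, clf in base:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
print("ensemble", ensemble.score(X_te, y_te))
```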
In addition, while the proposed framework may work well for general audio captioning, I wonder how it would perform on domain-specific lectures. As we know, lectures in many domains, such as medical science, chemistry, and psychology, are expected to contain terminology that is difficult to capture for an individual without a professional background in the field. In such cases it is possible that none of the crowd workers can type those terms correctly, resulting in an incorrect caption. I think the paper could be strengthened with a discussion of the situations in which the proposed method works best. Continuing this point, another possibility is to leverage the strengths of pre-trained speech recognition models together with crowd workers to build a human-AI team that achieves the desired performance, as sketched below.
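As a purely hypothetical sketch of such a human-AI team, an ASR hypothesis could be fed into the combiner as one more "worker" stream, using the same majority-vote idea as the sketch in the Summary above; the timestamps and words here are invented for illustration.

```python
from collections import Counter, defaultdict

def combine(streams, slot_ms=500):
    # Same majority-vote idea as the Summary sketch: bucket words by time.
    slots = defaultdict(Counter)
    for stream in streams:
        for ts, word in stream:
            slots[ts // slot_ms][word.lower()] += 1
    return [votes.most_common(1)[0][0] for _, votes in sorted(slots.items())]

workers = [
    [(0, "the"), (600, "quick"), (1700, "fox")],
    [(50, "the"), (1150, "brown"), (1750, "fox")],
]
# Hypothetical: treat the ASR hypothesis as one more "worker" stream, so
# the machine fills coverage gaps while humans can still outvote its errors.
asr_stream = [(30, "the"), (580, "quick"), (1120, "brown"), (1760, "box")]
print(" ".join(combine(workers + [asr_stream])))  # -> "the quick brown fox"
```

Here the ASR mishears "fox" as "box", but the two human streams outvote it, while the ASR's broader coverage reinforces the slots the workers did catch.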
Discussion
I think the following questions are worthy of further discussion.
- Would it be helpful if the recruiting process for crowd workers took their backgrounds into consideration, especially for domain-specific lectures?
- Although ASR may not be reliable on its own, would it be useful to leverage it as a contributor alongside the input of crowd workers?
- Is there potential to add other machine-in-the-loop components to the proposed framework?
- What do you think of the proposed approach compared with ensemble modeling that merges the outputs of multiple speech recognition algorithms to obtain the final result?
I believe taking the crowd workers’ backgrounds into consideration as a prerequisite could be crucial. For example, someone who is an active researcher and regularly attends conferences would likely understand domain-specific terms better than someone outside the research area. Since the task is only to identify words, such screening may not always be necessary; however, it could fill gaps and help a worker infer a word that was not clearly heard. Along with ensuring gaps are not missed, I do like the ensemble modeling approach, since it ensures the whole system does not depend on a single user: any errors an individual worker makes can be automatically corrected and verified by the other workers.