How do you identify the best way to structure your team? What kind of leadership setup should it have? How should team members collaborate and make decisions? What kind of communication norms should they follow? These are all important questions to ask when setting up a team, but answering them is hard because there is no single right answer: every team is different as a function of its members. So it is necessary to iterate on these dimensions and experiment with different choices to see which setup works best for a particular team. Earlier work in CSCW attempts this with “multi-armed bandits,” where each dimension is experimented with independently by a so-called bandit (a computational decision maker), so that the bandits’ per-dimension recommendations collectively converge on a configuration. However, this earlier work suffered from recommending too many changes and overwhelming the teams involved. This paper therefore proposes a version with temporal constraints that still provides the same benefits of exploration and experimentation while limiting how often changes are recommended, so the team is not overwhelmed.
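To make the mechanism concrete, here is a minimal sketch of the idea as I understand it: one simple epsilon-greedy bandit per team dimension, plus a cooldown that limits how often a change can actually be surfaced. The dimension names, the reward model, and the cooldown length are my own illustrative assumptions, not details from the paper.

```python
import random

class DimensionBandit:
    """Epsilon-greedy bandit over the options for one team dimension."""

    def __init__(self, options, epsilon=0.2):
        self.options = options
        self.epsilon = epsilon
        self.counts = {o: 0 for o in options}
        self.values = {o: 0.0 for o in options}  # running mean reward

    def recommend(self):
        if random.random() < self.epsilon:
            return random.choice(self.options)  # explore
        return max(self.options, key=lambda o: self.values[o])  # exploit

    def update(self, option, reward):
        self.counts[option] += 1
        n = self.counts[option]
        self.values[option] += (reward - self.values[option]) / n


class TemporallyConstrainedAdvisor:
    """One bandit per dimension; only surfaces changes after a cooldown."""

    def __init__(self, dimensions, cooldown=3):
        self.bandits = {d: DimensionBandit(opts) for d, opts in dimensions.items()}
        self.cooldown = cooldown  # steps that must pass between changes
        self.last_change = {d: -cooldown for d in dimensions}
        self.current = {d: opts[0] for d, opts in dimensions.items()}

    def step(self, t, reward):
        # Credit the current configuration with the observed team score.
        for d, choice in self.current.items():
            self.bandits[d].update(choice, reward)
        # Recommend a change only if the dimension's cooldown has elapsed.
        for d, bandit in self.bandits.items():
            if t - self.last_change[d] >= self.cooldown:
                pick = bandit.recommend()
                if pick != self.current[d]:
                    self.current[d] = pick
                    self.last_change[d] = t
        return dict(self.current)
```

Under this sketch, dropping the cooldown recovers the earlier unconstrained behavior (a possible change every step), which is exactly the "too many changes" failure mode the paper is reacting to.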
This is my first exposure to this kind of CSCW literature, and I find it a very interesting look into how computational decision makers can help build better teams. The idea of a computational agent observing a team’s performance and recommending improvements to its dynamics intuitively makes sense: the team members themselves either can’t take an objective view because of their biases, or may be afraid to propose experimentation for fear of upsetting the team dynamic. Incorporating temporal constraints into these systems is also a cool idea, because humans can’t deal with frequent change without being overwhelmed, so having an external arbiter pace those changes is very useful. I wonder whether the human managers’ failure to experiment is because humans in general are risk averse, or because the managers that were picked were particularly risk averse. This ties into my next complaint, about the experiment sizes: both in the manager condition and overall, I find the experiment size awfully small. I don’t think you can capture proper trends, especially sociological trends like the ones discussed in this paper, with experiments on just 10 teams. A larger experiment should have been run to identify broader trends before this paper was published. Assuming the related earlier work with multi-armed bandits had similar experiment sizes, those should have been larger experiments as well.
- Could we expand the DreamTeam recommendations so that, in addition to recommending changes along the different dimensions, it can also recommend more specific things? The main thing I was thinking of: if it is changing the hierarchy to a leader-based setup, could it also recommend a leader, or explicitly recommend that people vote on one, rather than just saying “hey, you guys now need to work with a leader-type setup”?
- Considering how limited the feedback DreamTeam can gather is, what else could be added beyond just looking at the scores at different time steps?
- What would it take for managerial setups to be less risk averse? Is the point of creating something like DreamTeam to help push managers to have more confidence in instituting change, or is it to have a robot take care of everything, sans managers entirely?
In response to your first question, I think that’s a great idea. It would make the system even more helpful and would probably remove some of its possible pain points. If a team is dysfunctional and tries to use DreamTeam to get better, and the bot suggests they choose a leader, it would be hard for an already-dysfunctional team to decide which of them should lead. Having the bot decide for them would make that process much smoother. I also think the system could go even further and test different leaders to try to find the best one. The system could do similar things on the other dimensions, trying slight variations on each to find the best variant of each dimension.