Summary
Fraser et al.’s paper discusses how difficult some software programs can be, especially for novices, and proposes a solution to help ameliorate the steep learning curves they often possess. Their solution is the DiscoverySpace interface – a tool built around macros that helps novice users discover new capabilities of the software they want to use. They test this with a prototype designed to help participants learn the popular image-editing program Photoshop. After collecting 115 macros from various sources, they designed their workflow and interface and ran a study to evaluate it. In a study with 28 participants, they found that participants were significantly less likely to be discouraged, or to say they couldn’t figure something out, when the DiscoverySpace tool was installed alongside Photoshop.
Personal Reflection
This paper provides a fantastic workflow for easing novice users into new and difficult programs. I liked it because it provides a somewhat more customized experience than the YouTube video walkthroughs and online tutorials that I’m accustomed to using. I would even like to use this interface for Photoshop myself, as it’s one program I attempted to break into a few times early in my college career, always failing because there were too many features described too obliquely.
I was surprised that the authors removed the suggestions that contained pauses and dialogue. I would have expected those to be better able to give the user the appropriate background for the effects they wanted to achieve. However, once they explained their reasoning – that the explanations often were insufficient and confused users – removing them altogether made much more sense.
I’m not sure how I feel about their comment that they may later support “paid” actions, where macros considered to be of higher quality would require some form of compensation for the macro creator. I don’t think an academic paper is the place for that sort of suggestion, as it doesn’t really add to the software or the approach the paper presents. Any tool an academic paper presents could be used in commercial software, so why would that be of particular note in this paper?
Lastly, and this is more of a quibble, I was more put off than I expected by the text-wrapped images seen in Figures 3 and 4. The wrapped layout is more difficult to read than it would be in a casual magazine-type publication, and should be reserved for that sort of venue.
Questions
- Do you think the 115 collected actions are a large enough testbed for the prototype tool? That is, should they have more actions to better represent the range of possible uses? How would they generate more?
- Beyond using image analysis to present some initial ideas to the users, what other ways might you improve their approach to make it more automated, or do you think there’s enough or too much automation already?
- What other programs could use this approach, and how might they integrate it into their platforms?
I relate a lot to your reflections.
I also tried many times to learn Photoshop only to come back to the comfort of MS paint.
I’d also probably use this software if it comes out of the prototype phase.
And yes, the image alignments threw the sentences all over the place, which made me cringe.
Overall, I thought the study was done in a bit of a rush.
Regarding your third question, one of the big features of this software is showing previews of an action’s results.
Therefore, I feel tools with a lot of visuals could benefit from this approach.
I think video editing software is a good fit.
An add-on approach like the one the authors used for DiscoverySpace would suit nicely.