====== Asking and Answering Questions during a Programming Change Task ======

====== Abstract ======
  
Little is known about the specific kinds of questions programmers ask when evolving a code base and how well existing tools support those questions. To better support the activity of programming, answers are needed to three broad research questions: (1) what does a programmer need to know about a code base when evolving a software system? (2) how does a programmer go about finding that information? and (3) how well do existing tools support programmers in answering those questions? We undertook two qualitative studies of programmers performing change tasks to provide answers to these questions. In this paper, we report on an analysis of the data from these two user studies. This paper makes three key contributions. The first contribution is a catalog of 44 types of questions programmers ask during software evolution tasks. The second contribution is a description of the observed behavior around answering those questions. The third contribution is a description of how existing deployed and proposed tools do, and do not, support answering programmers' questions.

====== Comments ======

//Yann-Gaël Guéhéneuc, 2014/01/08//
- 
In this paper, the authors seek to identify the typical questions asked by developers during real change tasks. Knowing such questions could help design tool support that is more efficient than today's; it could also help refine models of program comprehension. The paper thus complements nicely previous work on models of program comprehension, programmers' questions, and empirical studies of change. It includes a very interesting list of references but, unfortunately, does not relate the cited previous work in detail to the 44 questions discussed in the paper. In particular, when describing other work on programmers' questions, it does not explicitly relate the previous work by:
  - Johnson and Erdem, who studied Usenet newsgroups and classified questions as goal oriented, symptom oriented, and system oriented;
  - Herbsleb and Kuwana, who studied design meetings and classified questions according to their targets (evolve, task assignment, interface, realization, and identity), attributes (who, what, how, why, when), and lifecycle stages (requirements, design, implementation, maintenance);
  - Letovsky, who studied programmers and classified their conjectures as why, how, what, whether, and discrepancy;
  - Erdos and Sneed, who, based on their personal experience, reported questions such as "where is a particular subroutine/procedure invoked" and "what are the arguments and results of a given function";
  - Erdem et al., who reused their study of Usenet newsgroups and classified questions based on their topic, type, and relation;
  - Ko et al., who studied co-located software teams and classified questions as being about writing code, submitting a change, triaging bugs, reproducing a failure, understanding execution behaviour, reasoning about a design, and maintaining awareness.
It would have been interesting to relate all these different sets of questions and to identify their intersections, redundancies, and gaps. In particular, there seems to be little interest about the behaviour... Other interesting questions pertain to the possible errors made when mapping (top-down) or grouping (bottom-up) concepts and to the stopping condition: it seems that few authors have studied the errors that developers make when understanding programs (and why they make them). Another open direction would be to relate the different sets of questions to comprehension theories/models, such as the theory of cognitive support or distributed cognition.
- 
The authors observed two sets of developers: pair programmers in an artificial environment (pairs of students performing change tasks on an unknown system) and in a real environment (professional developers performing changes on a system that their company develops). They use a grounded-theory analysis to code the collected audio recordings (among other data) and, as categories emerge, to perform further selective sampling and gather more variations of the categories. "The aim here is to build rather than test [a] theory and the specific result of this [analysis] is a theoretical understanding of the situation of interest that is grounded in the data collected."
- 
Although some explanations are provided, I am confused about the choice of pairs of programmers in the artificial environment. In both environments, the limits of the think-aloud method used are not discussed: in particular, thinking aloud raises a problem of social desirability. This limit could explain the lack of "more involved" questions regarding design choices (not the how/what but the why) and the behaviour of the systems.
- 
The authors considered source code as a graph of entities and categorised the questions as finding focus points, expanding focus points, understanding a subgraph, and questions over groups of subgraphs. Surprisingly, they mentioned neither design patterns as typical cases of "subgraphs" nor Sim's structured transition graphs to explain the "jumps" that developers make between categories. The 44 questions are as follows:
  - Finding Focus Points
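To make the graph-of-entities view concrete, here is a minimal sketch of how the three intra-graph question categories map onto operations over such a graph. The class, entity names, and edge kinds are my own illustrations, not taken from the paper:

```python
# Sketch: source code as a graph of entities (classes, methods, ...)
# connected by relations (declares, calls, ...). Each question category
# from the paper corresponds to a different kind of graph operation.

class ProgramGraph:
    def __init__(self):
        self.entities = {}   # name -> kind ("class", "method", ...)
        self.edges = []      # (source, relation, target) triples

    def add_entity(self, name, kind):
        self.entities[name] = kind

    def add_edge(self, source, relation, target):
        self.edges.append((source, relation, target))

    # 1. Finding focus points: locate entities relevant to the task,
    #    e.g. by a keyword search over entity names.
    def find_focus_points(self, keyword):
        return [n for n in self.entities if keyword.lower() in n.lower()]

    # 2. Expanding focus points: follow relations out of (or into) a
    #    known entity to discover its direct neighbours.
    def expand(self, name):
        outgoing = [(r, t) for (s, r, t) in self.edges if s == name]
        incoming = [(r, s) for (s, r, t) in self.edges if t == name]
        return outgoing + incoming

    # 3. Understanding a subgraph: collect everything reachable from a
    #    focus point, i.e. the neighbourhood to be comprehended.
    def subgraph(self, name):
        seen, todo = set(), [name]
        while todo:
            current = todo.pop()
            if current in seen:
                continue
            seen.add(current)
            todo += [t for (s, r, t) in self.edges if s == current]
        return seen

g = ProgramGraph()
g.add_entity("Order", "class")
g.add_entity("Order.total", "method")
g.add_entity("TaxRule.apply", "method")
g.add_edge("Order", "declares", "Order.total")
g.add_edge("Order.total", "calls", "TaxRule.apply")

print(g.find_focus_points("total"))   # -> ['Order.total']
print(g.expand("Order.total"))
print(sorted(g.subgraph("Order")))
```

The fourth category, questions over groups of subgraphs (e.g. comparing two such reachable sets), would operate on several `subgraph` results at once.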