
Preface: Live Studies at REFSQ 2019



Maya Daneva

University of Twente, The Netherlands
m.daneva@utwente.nl

The REFSQ conference has long promoted the application of empirical research methods to the exploration and evaluation of requirements engineering phenomena.

As part of this tradition, REFSQ organizes a special plenary session for conducting a “live study” with the conference attendees on a voluntary basis. A live study may be a (controlled) experiment, a focus group, an interview-based study, or an online survey related to requirements engineering.

REFSQ 2019 received four submissions for “live studies”. Each submission was reviewed by two independent reviewers. A sub-committee then prioritized the live study proposals based on various factors, and two proposals were selected for inclusion in the conference program:

"Users perceptions on security and satisfaction requirements of context-aware applications: A online questionnaire“ authored by Denisse Muñante and Nelly Condori-Fernandez.

“Assessing the Effect of Learning Styles on Risk Model Comprehensibility: A Controlled Experiment”, authored by Katsiaryna Labunets and Nelly Condori-Fernandez.

The authors introduced their studies to the REFSQ 2019 attendees and conducted them according to their proposed plans. Preliminary results of the two studies will be shared with the REFSQ 2019 participants in the allotted session on Thursday, March 21, 2019.

Acknowledgements

I am particularly grateful to the REFSQ 2019 Programme Co-Chairs, Eric Knauss and Michael Goedicke, for including “Live Studies” in the overall REFSQ 2019 programme. I also thank all the members of the Live Study Programme Committee.

Finally, we are much indebted to all the volunteer participants in the two studies. It is their expertise and insight into the subject matter that contribute to the authors' efforts to answer the research questions posed in the two studies.

Live Study Programme Committee

Maya Daneva (Chair), University of Twente, The Netherlands
Dan Berry, University of Waterloo, Canada
Nelly Condori-Fernandez, University of Amsterdam, The Netherlands and University of Coruna, Spain
Fabiano Dalpiaz, University of Utrecht, The Netherlands
Andrea Herrmann, Herrmann and Ehrlich, Germany
Jennifer Horkoff, Chalmers University, Sweden
Nazim Madhavji, University of Western Ontario, Canada
