: Using games with a purpose and bootstrapping to create domain-specific sentiment lexicons. In: Berendt, Bettina; de Vries, Arjen; Fan, Wenfei; Macdonald, Craig; Ounis, Iadh; Ruthven, Ian (Eds.): Proceedings of the 20th International Conference on Information and Knowledge Management (CIKM 2011), Glasgow, October 24-28, 2011. New York, NY: Association for Computing Machinery (ACM), pp. 1053-1060. Available online at https://doi.org/10.1145/2063576.2063729, last checked December 4, 2020.
Abstract: Sentiment detection analyzes the positive or negative polarity of text. The field has received considerable attention in recent years, since it plays an important role in providing means to assess user opinions regarding an organization's products, services, or actions. Approaches towards sentiment detection include machine learning techniques as well as computationally less expensive methods. Both approaches rely on the use of language-specific sentiment lexicons, which are lists of sentiment terms with their corresponding sentiment value. The effort involved in creating, customizing, and extending sentiment lexicons is considerable, particularly if less common languages and domains are targeted without access to appropriate language resources. This paper proposes a semi-automatic approach for the creation of sentiment lexicons which assigns sentiment values to sentiment terms via crowd-sourcing. Furthermore, it introduces a bootstrapping process operating on unlabeled domain documents to extend the created lexicons, and to customize them according to the particular use case. This process considers sentiment terms as well as sentiment indicators occurring in the discourse surrounding a particular topic. Such indicators are associated with a positive or negative context in a particular domain, but might have a neutral connotation in other domains. A formal evaluation shows that bootstrapping considerably improves the method's recall. Automatically created lexicons yield a performance comparable to professionally created language resources such as the General Inquirer.
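The bootstrapping idea described in the abstract can be illustrated with a minimal sketch: terms already in the lexicon score the sentences they appear in, and unknown co-occurring terms inherit the polarity of their context if it is consistently strong. The function name `bootstrap_lexicon`, the threshold, and the iteration count are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def bootstrap_lexicon(seed, documents, threshold=0.5, iterations=2):
    """Extend a seed sentiment lexicon from unlabeled domain documents.

    seed: dict mapping terms to sentiment values in [-1, 1]
    documents: list of tokenized sentences (lists of lowercase terms)
    """
    lexicon = dict(seed)
    for _ in range(iterations):
        context = defaultdict(list)
        for sentence in documents:
            # Score the sentence with the terms the lexicon already knows.
            known = [lexicon[t] for t in sentence if t in lexicon]
            if not known:
                continue
            polarity = sum(known) / len(known)
            # Unknown terms collect the polarity of the contexts they occur in.
            for t in sentence:
                if t not in lexicon:
                    context[t].append(polarity)
        for term, scores in context.items():
            avg = sum(scores) / len(scores)
            # Keep only candidate indicators with a consistently strong context.
            if abs(avg) >= threshold:
                lexicon[term] = avg
    return lexicon

seed = {"good": 1.0, "bad": -1.0}
docs = [["good", "reliable", "service"],
        ["bad", "slow", "support"],
        ["reliable", "fast", "delivery"]]
extended = bootstrap_lexicon(seed, docs)
```

Running several iterations lets newly learned terms (such as "reliable" above) score further sentences in later passes, which is how domain-specific indicators with no inherent sentiment can enter the lexicon.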
: Augmenting Lightweight Domain Ontologies with Social Evidence Sources. In: Tjoa, A. M.; Wagner, R. R. (Eds.): Proceedings of the Twenty-First International Workshop on Database and Expert Systems Applications (DEXA 2010), Bilbao, August 30 - September 3, 2010. Institute of Electrical and Electronics Engineers (IEEE), pp. 193-197. Available online at https://doi.org/10.1109/DEXA.2010.53, last checked November 12, 2020.
Abstract: Recent research shows the potential of utilizing data collected through Web 2.0 applications to capture changes in a domain's terminology. This paper presents an approach to augment corpus-based ontology learning by considering terms from collaborative tagging systems, social networking platforms, and micro-blogging services. The proposed framework collects information on the domain's terminology from domain documents and a seed ontology in a triple store. Data from social sources such as Delicious, Flickr, Technorati and Twitter provide an outside view of the domain and help incorporate external knowledge into the ontology learning process. The neural network technique of spreading activation is used to identify relevant new concepts, and to determine their positions in the extended ontology. Evaluating the method with two measures (PMI and expert judgments) demonstrates the significant benefits of social evidence sources for ontology learning.
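The spreading activation step mentioned in the abstract can be sketched as a simple pulse-based propagation over a weighted term graph: energy starts at seed concepts and flows along edges with decay, so candidate terms close to the seed ontology accumulate high activation. The graph shape, decay factor, and function name are assumptions for illustration; the paper's actual network and parameters are not reproduced here.

```python
def spread_activation(graph, seeds, decay=0.5, steps=3):
    """Propagate activation from seed concepts over a weighted term graph.

    graph: dict mapping a node to a list of (neighbour, weight) edges
    seeds: dict mapping seed nodes to their initial activation
    """
    activation = dict(seeds)
    for _ in range(steps):
        pulse = {}
        for node, energy in activation.items():
            for neighbour, weight in graph.get(node, []):
                # Each node passes a decayed, weighted share of its energy on.
                pulse[neighbour] = pulse.get(neighbour, 0.0) + energy * weight * decay
        for node, energy in pulse.items():
            activation[node] = activation.get(node, 0.0) + energy
    return activation

graph = {
    "climate": [("emissions", 0.8), ("weather", 0.4)],
    "emissions": [("carbon", 0.9)],
}
scores = spread_activation(graph, {"climate": 1.0})
# Candidate concepts can then be ranked by their accumulated activation.
ranked = sorted(scores, key=scores.get, reverse=True)
```

Ranking candidates by activation gives both a relevance filter (low-energy terms are discarded) and a placement hint (a new concept attaches near the nodes that activated it most).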
: Heuristics for the evaluation of library online catalogues. In: Katsirikou, Anthi (Ed.): Proceedings of the 2nd International Conference on Qualitative and Quantitative Methods in Libraries (QQML 2010), Chania, Crete, May 25-28, 2010.
Abstract: With the growing popularity of new search paradigms (e.g. faceted search) and modern web technologies, many libraries have integrated new interaction possibilities into their websites. In particular, the online catalogues of libraries have become more dynamic and interactive than ever before. Up to now, however, there is little empirical evidence on which of these new components really deliver added value for users, and it is not entirely clear how these elements should be designed and integrated into library websites. These new concepts pose challenges not only for the implementation of library websites but also for usability evaluations of these sites. A very common instrument for assessing the quality of web interfaces is the so-called heuristic evaluation. However, evaluation criteria such as Nielsen's ten heuristics are too generic for an in-depth analysis of specific components. Within the project “E-lib.ch – Swiss Electronic Library”, a modular list of criteria was developed that takes the particular aspects of online catalogues into account and can also be used for self-evaluations by library staff.