(2023): Die «Campus Brauerei». Ein neues Labor an der FH Graubünden. Blog (FHGR Blog). Available online at https://blog.fhgr.ch/blog/die-campus-brauerei-ein-neues-labor-an-der-fh-graubuenden/, last checked 31.08.2023
Abstract: Brewing is coming to the University of Applied Sciences of the Grisons (FH Graubünden). In September the university opens a new laboratory, officially named the Laboratory for Digitalisation in the Beverage Industry. Unofficially, however, the new laboratory is already known as the «Campus Brauerei», because alongside the university's other research activities it will, above all, also produce beer.
(2020): Das erweiterte Potenzial von Bildungsdaten (Einblicke in die Forschung). Available online at https://www.fhgr.ch/fileadmin/publikationen/forschungsbericht/fhgr-Einblicke_in_die_Forschung_2020.pdf, last checked 09.04.2021
Abstract: Research data on education and learning are diverse. Yet if they remain isolated, without being linked, their potential can only be exploited to a limited extent. We will identify relevant datasets and merge them, thereby increasing their potential for scientific analyses.
(2019): Computer-based Assessment aus Chur für die Schweiz – und darüber hinaus. In: Wissensplatz (1), pp. 18-19. Available online at https://www.fhgr.ch/fhgr/medien-und-oeffentlichkeit/publikationen/wissensplatz/februar-2019/, last checked 14.02.2019
Abstract: Computer-based testing is increasingly used in cross-national as well as smaller comparative studies that are conducted by educational researchers and used by education policy makers. For the study on the assessment of basic competencies (Überprüfung der Grundkompetenzen) commissioned by the Swiss Conference of Cantonal Ministers of Education, which is to measure the performance of students in all 26 cantons, HTW Chur is responsible for data management.
(2019): Fully Automated Assessment. ATP Innovations in Testing. Association of Test Publishers. Orlando, 18 March 2019
Abstract: The time has come to put the pieces together and automate the complete assessment cycle. Using modern computer science methods, it is possible to set up an assessment without any human intervention. We outline a proof of concept and demonstrate a prototype assessment in the session. Fully automated assessment (FAA) has huge potential in areas like self-assessment, screening, learning and other low-stakes assessments. FAA is domain-independent (provided some prerequisites are fulfilled) and scales very well. The idea behind FAA is to automate all steps of the assessment cycle. Certain steps have already been handled elsewhere, such as automated item generation (AIG) and computerized adaptive testing (CAT). Fewer publications can be found on aspects like automated test assembly of uncalibrated items, automated result evaluation, and automated item calibration. In our work, we combine all steps to arrive at a truly unsupervised, fully automated assessment from start to end.
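The FAA cycle summarized in this abstract can be illustrated with a toy pipeline. All function names, the item template, and the proportion-correct scoring rule below are invented for illustration; they are not the authors' implementation.

```python
import random

def generate_items(template, value_pairs):
    """Automated item generation (AIG): instantiate a cloze template with value pairs."""
    return [{"stem": template.format(a=a, b=b), "key": a + b} for a, b in value_pairs]

def assemble_test(item_pool, length):
    """Automated test assembly from a pool of (here: uncalibrated) items."""
    return random.sample(item_pool, length)

def administer(test, respond):
    """Deliver the test; `respond` stands in for the test taker."""
    return [(item, respond(item)) for item in test]

def evaluate(responses):
    """Automated result evaluation: here simply the proportion of correct answers."""
    hits = sum(1 for item, answer in responses if answer == item["key"])
    return hits / len(responses)

# End-to-end run without human intervention, with a respondent who always knows the key.
pool = generate_items("What is {a} + {b}?", [(i, j) for i in range(5) for j in range(5)])
test = assemble_test(pool, 5)
score = evaluate(administer(test, lambda item: item["key"]))
```

In a real FAA system each stub would be replaced by a substantial component (e.g. adaptive item selection, IRT-based calibration); the sketch only shows how the stages chain together with no manual step in between.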
(2017): A School Survey Management System for Educational Assessments in Switzerland. IASSIST. International Association for Social Science Information Services & Technology. Lawrence, 24 May 2017. Available online at https://iassistdata.org/conferences/archive/2017, last checked 17.04.2020
Abstract: Currently, two large educational assessment programs institutionalized by the cantons exist in Switzerland: the well-known Programme for International Student Assessment (PISA), an OECD initiative involving a large number of nations, and the Swiss National Core Skills Assessment Program (in German: ÜGK – Überprüfung der Grundkompetenzen). Following completion of PISA 2015, the core skills assessment program was initiated to base the assessment on Swiss measurement instruments and obtain more results of national relevance. Both programs have been computer-based assessments since 2016, but their IT systems are not yet optimized to support the fieldwork adequately. Therefore, a software tool will be developed to support, on the one hand, administration and field monitoring during data collection and, on the other hand, to optimize the data documentation process. In this presentation we would like to show which processes should be modeled and where documentation and metadata could be generated as a byproduct without additional effort. This includes, in particular, paradata, which provide interesting opportunities for analysis.
(2016): Design Considerations for DDI-Based Data Systems. In: IASSIST Quarterly 39 (3), pp. 6-11. Available online at https://doi.org/10.29173/iq126, last checked 17.04.2020
Abstract: Growing amounts of available data and new developments in data handling result in the need for advanced solutions. Therefore, organizations providing data have to focus more and more on technical and design issues. In order to keep the effort and expense low, data storage and data documentation must go hand in hand. This paper aims to help decision-makers by highlighting two promising approaches - relational databases for data storage and the DDI (Data Documentation Initiative) standard for data documentation. Possible interactions between both solutions are discussed, whereby the focus is on the advantages and disadvantages of representing DDI in its native XML format vs. the storage format of relational databases. In addition, three use cases are presented to provide further clarity on design considerations for DDI-based data systems: (1) agencies with existing relational database structures, (2) agencies with homogeneous DDI input and output, and (3) agencies with mixed environments.
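The trade-off this abstract weighs (native DDI XML versus relational storage) can be sketched with a toy example. The XML fragment below is a drastically simplified stand-in, not valid DDI (real DDI uses namespaces and far richer structure), and the table layout is an assumption:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for DDI variable metadata in native XML form.
ddi_like_xml = """
<Variables>
  <Variable id="v1"><Name>age</Name><Label>Age in years</Label></Variable>
  <Variable id="v2"><Name>edu</Name><Label>Highest education</Label></Variable>
</Variables>
"""

# Relational representation: one table per element type, one row per element.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variable (id TEXT PRIMARY KEY, name TEXT, label TEXT)")

for var in ET.fromstring(ddi_like_xml):
    conn.execute(
        "INSERT INTO variable VALUES (?, ?, ?)",
        (var.get("id"), var.findtext("Name"), var.findtext("Label")),
    )

# Once shredded into tables, the metadata is queryable with plain SQL --
# the kind of advantage the paper attributes to the relational option,
# at the cost of losing the document structure of the original XML.
rows = conn.execute("SELECT name, label FROM variable ORDER BY id").fetchall()
```

Storing the XML verbatim keeps full fidelity to the standard; shredding it as above eases integration with existing relational systems, which is exactly the design decision the paper's use cases turn on.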
(2016): Assessment of the Swiss National Objectives in Education. 8th Annual European DDI User Conference (EDDI16). GESIS – Leibniz Institute for the Social Sciences. Köln, 2016
Abstract: In Switzerland, the main responsibility for education and culture lies with the 26 cantons. In 2006 it was decided that some cornerstones of the Swiss education system had to be harmonized nationally. The Swiss national objectives in education are among these cornerstones. They describe which competencies students in all cantons should have obtained after 2, 4 and 9 years of school. In 2016 the first surveys on the national objectives in education were conducted; more are planned for 2017. For the management of these surveys, software tools will be developed which support documentation of the questionnaires with DDI. Further metadata standards are currently under discussion because no single current standard seems to meet all demands. In our presentation we would like to show where DDI can be supportive in the area of educational assessments and where gaps within DDI have to be filled for this special field.
(2016): Management of Metadata. An Integrated Approach to Structured Documentation. In: Blossfeld, Hans-Peter; Maurice, Jutta von; Bayer, Michael; Skopek, Jan (eds.): Methodological Issues of Longitudinal Surveys: The Example of the National Educational Panel Study. Wiesbaden: Springer VS, pp. 627-647
(2014): PIAAC Germany 2012. Technical Report. Münster: Waxmann. Available online at https://www.waxmann.com/waxmann-buecher/?tx_p2waxmann_pi2%5bbuchnr%5d=3113&tx_p2waxmann_pi2%5baction%5d=show, last checked 11.09.2020
Abstract: The Programme for the International Assessment of Adult Competencies (PIAAC) is a large-scale initiative of the Organization for Economic Cooperation and Development (OECD) that aims at assessing key adult competencies considered important for individual and societal success. This technical report describes how the PIAAC survey was conducted in Germany. It provides information on the PIAAC instruments: the background questionnaire and the cognitive assessment. Furthermore, it describes sampling, fieldwork, weighting, and nonresponse bias analyses. The report concludes with an overview of the data management processes and data products as well as a brief evaluation of the overall data quality.
(2013): An Update on the Rogatus Platform. 5th Annual European DDI User Conference (EDDI13). Réseau Quetelet. Paris, 3 December 2013
Abstract: Rogatus is an open-source questionnaire and metadata solution based on the DDI 3.2 and SDMX standards, using the Generic Longitudinal Business Process Model (GLBPM) to specify its tool chain. Currently the project is supported by DIPF, TBA21, OPIT, IAB and GESIS and is attracting more and more interest, especially from NSIs and data collection agencies. This presentation gives an update on new developments since NADDI 2013, including the data management portal, coding support for ISCED, improvements to the case management system, and compatibility with other platforms like Colectica or MMIC, plus an outlook on the mobile sampling client.
(2013): Proposing a Metadata Solution over Multiple RDCs in the German Context. 5th Annual European DDI User Conference (EDDI13). Réseau Quetelet. Paris, 3 December 2013
Abstract: There is a wide range of research data available in Germany. Within the last decade a great number of new Research Data Centres (RDCs) have originated, offering a variety of information for scientific research. Currently 25 RDCs are accredited under the umbrella of the German Data Forum (RatSWD). Having this data available is a good thing for researchers; at the same time, finding the best data for a given project is not easy at all. Currently researchers have to look at data documentation in different formats, spread over 25 homepages. Researchers need a single point of access and a structured way to search the available data. Information about datasets, about the research potential of variables, and about how to access data is important in that regard. A reliable and machine-readable standard used by all RDCs would enable software tools that allow researchers to effectively discover the richness of research data available in Germany. The case of Germany is only one example of the need for a standard like DDI, and it shows that the goal of an effective way to explore the research data landscape has not yet been reached. DDI still must be adopted by more data providers.
(2013): Harmonizing Between Different Agencies Using DDI Profiles. 5th Annual European DDI User Conference (EDDI13). Réseau Quetelet. Paris, 4 December 2013
Abstract: Developing software to support the DDI-L standard poses a challenge to agencies: the standard is in most cases much too vast for individual tool requirements. DDI-L 3.1 contains more than 900 main nodes in its schema, while a survey software most likely needs only 50-60 of them. The idea is therefore to use DDI Profiles to specify a subset of requirements for the individual purpose. A software solution for surveys could, for example, use two different DDI Profiles to express its compatibility with other similar software, e.g. the profiles "Survey Design" and "Data Collection". For other parts of the lifecycle which are relevant to other agencies, similar DDI Profiles can be specified (e.g. "Administrative Data", "Processed Data" and "Transaction Data"). During the design of software tools like Rogatus (DIPF, TBA21, OPIT and IAB) and DDI on Rails (SOEP), these issues were encountered, as the software is supposed to be compatible with other tools like Colectica and Questasy. Therefore the process of creating DDI Profiles for this harmonization has begun. Furthermore, discussions with the ABS about supporting their GSIM-based DDI Profiles are under way as well.
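The profile-based subsetting this abstract describes can be sketched as a compatibility check between tools. The element names and profile contents below are invented, not taken from the actual DDI-L schema, and the real DDI Profile mechanism declares such subsets in XML rather than in code:

```python
# Two toy "DDI Profiles", each naming the subset of schema elements a tool supports.
SURVEY_DESIGN_PROFILE = {"StudyUnit", "QuestionScheme", "QuestionItem"}
DATA_COLLECTION_PROFILE = {"StudyUnit", "DataCollection", "InstrumentScheme"}

def supports(tool_profiles, required_elements):
    """A tool can handle a task if its profiles jointly cover all required elements."""
    covered = set().union(*tool_profiles)
    return required_elements <= covered

# A survey tool exposing both profiles covers a task touching design and collection ...
ok = supports([SURVEY_DESIGN_PROFILE, DATA_COLLECTION_PROFILE],
              {"QuestionItem", "DataCollection"})
# ... while the design profile alone does not.
not_ok = supports([SURVEY_DESIGN_PROFILE], {"DataCollection"})
```

The point of the abstract carries over directly: two agencies that publish their profiles can check compatibility by set comparison instead of inspecting each other's full 900-node schema usage.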
(2013): Administrative Data in the IAB Metadata Management System. North American Data Documentation Conference (NADDI). University of Kansas. Lawrence, KS, 2 April 2013. Available online at http://hdl.handle.net/1808/11064, last checked 27.11.2020
Abstract: The Research Data Centre (FDZ) of the German Federal Employment Agency (BA) at the Institute for Employment Research (IAB) prepares and gives access to research data. Besides survey data, the IAB provides data deriving from the administrative processes of the BA. These data are very complex and not easy to understand and use, so good data documentation is crucial for users. DDI provides a data documentation standard that makes documentation and data sharing easier. The latter is especially important for providers of administrative data because more and more other data types are merged with administrative data. Nevertheless, there are also some drawbacks to using the DDI standard: data collection for administrative data differs from data collection for survey data, but DDI was established for survey data. At the same time, the description of complex administrative data should be as simple as possible. IAB and TBA21 are currently carrying out a project to build a Metadata Management System for the IAB. The presentation will highlight the documentation needs for administrative data and show how they are covered in the Management System. In addition, the need for DDI profiles, comprehensive software tools and future-proof data documentation for multiple data sources will be depicted.
(2013): Development of the Cognitive Items: Technical Report of the Survey of Adult Skills (PIAAC), Section 2. Paris: Organisation for Economic Cooperation and Development. Available online at http://www.oecd.org/skills/piaac/publications/#d.en.480407, last checked 27.11.2020
Abstract: The implementation of the cognitive items for PIAAC faced several challenges. As stated before, PIAAC was the first international large-scale study to be conducted entirely on the computer. Therefore, existing link items from prior studies like IALS and ALL had to be converted from paper to computer. In addition, new items had to be developed both in literacy and numeracy to take advantage of the new possibilities of computer-based assessment. Further, an entirely new assessment domain, problem solving in technology-rich environments, was defined and items had to be developed. This was all done under a short timeframe in collaboration with participating countries that developed items on their own, as well as by combining item development teams from different countries. To cope with these challenges, a multifaceted approach was taken, reusing existing item development and test delivery software to the extent possible and developing easy-to-use new software to fill in the gaps.
(2012): Representing and Utilizing DDI in Relational Databases. In: SSRN Electronic Journal. Available online at doi.org/10.2139/ssrn.2008184, last checked 27.11.2020
Abstract: This document is primarily intended for implementers of DDI-based metadata stores who are considering different technical options for housing and managing their metadata. The Data Documentation Initiative (DDI) metadata specification is expressed in the form of XML schema. With version 3, the DDI specification has become quite complex, including 21 namespaces and 846 elements. Organizations employing DDI, or considering doing so, may want to: store and manage the metadata elements in relational databases, for reasons of integration with existing systems, familiarity with relational database concepts (such as Structured Query Language), systems performance, or other reasons; select only the subset of the available DDI metadata elements that is of utility to their work; and have the flexibility of capturing metadata they need that would not fit into the DDI model. This paper discusses advantages and disadvantages of the relational database approach to managing DDI. It also describes methods for modeling DDI in relational databases and for formally defining subsets of DDI to employ in this environment.
(2012): Modeling the Lifecycle into a Combination of Tools. 4th Annual European DDI User Conference (EDDI12). Bergen, 3-4 December 2012
Abstract: Over the last few years, several different tools for DDI Lifecycle have been published. Nevertheless, none of the current tools is able to cover the full lifecycle from beginning to end. This presentation shows how a survey process – creating a study from scratch, designing the instruments, performing the data collection, handling the administrative processes, curating the data, disseminating the data, publication and, finally, data archiving for secondary usage – could be handled with individual tools. Next to well-known programs like Colectica or Questasy, this presentation also gives a first outlook on Rogatus QMMS – an open-source toolset currently in development at DIPF with support from GESIS, TBA21 and OPIT. Rogatus consists of different DDI-compliant applications (e.g. Questionnaire Builder, Translation Builder, Metadata Builder, Rogatus Portal) to support a multitude of survey processes.
(2012): Generic Longitudinal Business Process Model (DDI Working Paper Series. Longitudinal Best Practices). Available online at doi.org/10.3886/DDILongitudinal05, last checked 27.11.2020
Abstract: The intention of this document is to provide a generic model that can serve as the basis for informing discussions across organizations conducting longitudinal data collections, and other data collections repeated across time. The model is not intended to drive implementation directly. Rather, it is primarily intended to serve as a reference model against which implemented processes are mapped, for the purposes of determining where they may be similar to or different from other processes in other organizations. It may also prove useful to those designing new longitudinal studies, providing reminders of steps which may need to be planned. This is a reference model of the process of longitudinal and repeated cross-sectional data collection, describing the activities undertaken and mapping these to their typical inputs and outputs, which would then be described using DDI Lifecycle. With early roots in the social sciences, this model is grounded in human science. Elements such as anonymizing data (step 5.8 in Figure 5) and managing disclosure risk (step 8.6) relate directly to research on people, whether a biomedical study or a study on political attitudes. The model was developed with longitudinal surveys as the archetypal study type, so many of the examples in this paper relate to surveys. Nevertheless, the model described here is intended to be applicable to a wider range of study types. This model should be just as applicable to a longitudinal series of experiments as to a survey (see Block et al. 2011). This model is not intended to be comprehensive. It is intended to be descriptive of a generalized view of longitudinal data collection. This model may be extended or specialized to describe specific processes within an organization. Appendix A provides one example of extending this model by incorporating elements from another process model.
(2011): RemoteNEPS: data dissemination in a collaborative workspace. In: Zeitschrift für Erziehungswissenschaft 14 (2), pp. 315-325. Available online at https://doi.org/10.1007/s11618-011-0192-5, last checked 04.12.2020
Abstract: The German National Educational Panel Study (NEPS) was launched to collect longitudinal data for educational research. In total, 60,000 persons across six starting cohorts are surveyed and tested, which will result in a very large amount of data. One of the biggest challenges is therefore to offer convenient and user-friendly data access that at the same time meets high data protection standards. Since the 1990s, a modern research data infrastructure has been built up in Germany, with research data centres offering a variety of data access options. The National Educational Panel Study will follow these standards and, in addition, develop the modern remote access solution RemoteNEPS. This service allows users to work with the data from their own computers via a terminal server connection, while the research data remain in a secure environment on the servers of the National Educational Panel Study. This setting makes it possible to provide high-quality microdata while maintaining a high security standard. The RemoteNEPS concept not only guarantees the security of the data but also enables better data use, including the promotion of good scientific practice and the support of collaborative projects in educational research.