Webinars in 2018
SKOS - Overview and Modeling of Controlled Vocabularies
Antoine Isaac, Juan Antonio Pastor Sánchez
SKOS (Simple Knowledge Organization System) is the W3C recommendation for representing and publishing datasets of classifications, thesauri, subject headings, glossaries, and other types of controlled vocabularies and knowledge organization systems. The first part of the webinar gives an overview of Semantic Web technologies and walks through the elements of the SKOS model in detail. The second part presents different approaches to applying SKOS to represent controlled vocabularies.
The Current State of Automated Content Tagging: Dangers and Opportunities
Joseph Busch
There are real opportunities to use technology to automate content tagging, and there are real dangers that automated tagging will sometimes inappropriately promote or obscure content. We’ve all heard talks about AI, but little detail about how these applications actually work. Recently I’ve been working with clients to explore the current state of the art of so-called AI technology, and to trial several of these tools on research, policy, and general news content. In addition to framing the debate about whether this is AI or merely automation, this talk will describe how to run trials with these tools and will show results from actual trials.
The Role of Dublin Core™ Metadata in the Expanding Digital and Analytical Skill Set Required by Data-Driven Organisations
Steve Brewer
Many areas of our world are being subjected to digitalisation as leaders and policymakers embrace the possibilities that can be harnessed through the capture and exploitation of data. New business models are being developed, and new revenue streams are being uncovered, that require a solid and recognised data competence capacity. This process involves bringing together a range of traditional disciplines, from computing and engineering to business management and data science. Facilitating successful collaboration amongst these participants in order to create new cyber-physical systems can be achieved through a range of tools, but chief among them will be the application of robust, trustworthy, and reusable data. The Dublin Core™ Metadata Initiative provides a well-established schema of terms that can be used to describe such data resources. As more organisations in diverse fields awaken to the benefits of digitalisation, they will need to embrace data capture. Acquiring data science skills and competences at all levels of the organisation, and as an ongoing process over time, will be critical for their future. Whilst some elements will be specific, other skills will be common across sectors. Metadata foundations can obviously help with these commonalities. Applying a similarly structured approach to understanding and supporting skills acquisition will contribute significantly to the future success of data-driven organisations.
Introduction to Metadata Application Profiles
Karen Coyle
Successful data sharing requires that users of your data understand the data format, the data semantics, and the rules that govern your particular use of terms and values. Sharing often means the creation of “cross-walks” that transfer data from one schema to another using some or all of this information. However, cross-walks are time-consuming because the information that is provided is neither standardized nor machine-readable. Application profiles aim to make sharing data more efficient and more effective. They can also do much more than facilitate sharable data: APs can help metadata developers clarify and express design options; they can be a focus for consensus within a community; they can drive user interfaces; and they can be the basis for quality control. Machine-actionable APs could become a vital tool in the metadata toolbox and there is a clear need for standardization. Communities such as Dublin Core™ and the World Wide Web Consortium are among those working in this area.
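One way to picture the "machine-actionable" profiles described above is as plain data that a generic checker can apply for quality control. The sketch below is hypothetical (the Dublin Core™ property names are real, but the constraints and the record are invented for illustration):

```python
# A minimal sketch of a machine-actionable application profile:
# the profile itself is plain data (property, required?, repeatable?),
# and a generic checker applies it to a metadata record.
# The specific constraints below are illustrative assumptions.

PROFILE = [
    # (property,    required, repeatable)
    ("dc:title",    True,     False),
    ("dc:creator",  True,     True),
    ("dc:date",     False,    False),
]

def check(record, profile):
    """Return a list of constraint violations for a record,
    given as a dict mapping property names to lists of values."""
    problems = []
    for prop, required, repeatable in profile:
        values = record.get(prop, [])
        if required and not values:
            problems.append(f"missing required property {prop}")
        if not repeatable and len(values) > 1:
            problems.append(f"{prop} is not repeatable")
    return problems

record = {"dc:title": ["A study"], "dc:creator": ["Smith, A.", "Jones, B."]}
print(check(record, PROFILE))  # → []
```

Because the profile is data rather than prose, the same description could also drive an input form or be exchanged between communities, which is the efficiency gain over hand-written cross-walk documentation.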
Understanding and Testing Models with ShEx
Eric Prud’hommeaux, Tom Baker
Every structured exchange requires consensus about the structure. The Shape Expressions (ShEx) language captures these structures in an intuitive and powerful syntax. From metadata description (e.g. DDI) to data description (e.g. FHIR), ShEx provides a powerful schema language to develop, test, and deploy shared models for RDF data. This tutorial will explore the utility and expressivity of ShEx. Presented with side-by-side examples of schema and data, the audience will see how to use ShEx to solve everyday problems. The presentation will use multiple implementations of ShEx in order to leave participants with enough familiarity to get started using ShEx on their own.
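The side-by-side schema-and-data style mentioned above looks roughly like the following sketch in ShEx compact syntax (the `ex:` namespace and the shape are hypothetical, not taken from the tutorial):

```
PREFIX ex:  <http://example.org/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

# Conforming nodes must have exactly one ex:name (a string),
# at most one ex:age (an integer), and any number of ex:knows
# links to nodes that themselves conform to ex:PersonShape.
ex:PersonShape {
  ex:name  xsd:string ;
  ex:age   xsd:integer ? ;
  ex:knows @ex:PersonShape *
}

# Matching Turtle data for the schema above:
# ex:alice ex:name "Alice" ; ex:age 30 ; ex:knows ex:bob .
```

A validator checks a focus node (here `ex:alice`) against a shape and reports conformance or a description of what failed, which is what makes ShEx useful for both developing and testing shared models.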
A Linked Data Competency Framework for Educators and Learners
Marcia Zeng
Linked Data is recognized as one of the underpinnings of open data, open science, and data-driven research and learning in the Semantic Web era. Questions still exist, however, about what should be expected as Linked Data-related knowledge, skills, and learning outcomes, and where to find relevant learning materials. This webinar will introduce a competency framework that defines the knowledge and skills necessary for professional practice in the area of Linked Data, developed by the Linked Data for Professional Educators (LD4PE) project, funded by the Institute of Museum and Library Services (IMLS).