Access Statement: A statement, appended to publications and metadata records, explaining how a dataset may be accessed. An Access Statement should provide details of the data location, any licensing or other access conditions, a Digital Object Identifier (DOI), and, where necessary, who the main contact is. For data that are not openly available, the legal and ethical reasons for this should be outlined.
Affiliation: The institutional or corporate address that an author includes alongside their name on the published version of a publication. Authors may sometimes list multiple affiliations on a publication.
Altmetrics: Altmetrics (i.e. 'alternative' metrics) collate quantitative and qualitative information about online mentions of research outputs.
Bibliometrics: The term 'bibliometrics' refers to 1) the systematic statistical, text mining, and network methods used to study research publications and/or 2) the established, quantitative indicators presented as results of these studies.
Citation: A citation results when 1) one publication references another in its bibliography and 2) this reference is subsequently recorded as a link between the two publications in a citation index. It is the latter requirement that gives us different citation counts in, say, Scopus, Web of Science, and Google Scholar. Note that a citation is a one-way link going backwards in time from the 'citing' document to the 'cited' document.
Citation index: A citation index is a database that holds bibliographic records (i.e. records containing publication metadata but not the actual publications themselves) and captures citation links between these records.
DOI: A Digital Object Identifier (DOI) is a unique, persistent, digital ID assigned to a research output. A well-established technical and social infrastructure exists for the worldwide assignment and administration of DOIs.
Data Repository (Data Centre): A facility where data may be deposited and preserved safely, with open or regulated access.
Data Management Plan (DMP): A description of how data produced by a research project will be collected, managed, shared, and preserved. A DMP should provide information about collection/creation methods for any new data as well as details of existing data from other sources. Data storage, security, access controls, and archiving arrangements should also be included.
N.B. Most funders now require researchers to prepare a DMP both as part of the bidding process and after funding has been secured. The DMP is, therefore, a live document and a key part of the project management documentation. Consequently, it will need to be revisited and updated regularly during the lifetime of a project.
DMPOnline (https://dmponline.dcc.ac.uk/): A Data Management Planning tool developed by the Digital Curation Centre and subscribed to by the University of Surrey.
DMPOnline contains both generic and funder-specific templates that have been designed to take researchers through the process of preparing a DMP step by step.
h-index: The h-index is the largest number h of documents in a set of publications that have each received at least h citations. An h-index is most often calculated for an individual author's output. However, an h-index can be calculated for any set of publications, including those of departments and institutions. The h-index is no longer considered a suitable bibliometric for most purposes because it is not normalised and inherently discriminates on the basis of career length and research field.
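The calculation behind this definition can be sketched in a few lines. This is an illustrative implementation only, not code taken from any citation index:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h
    publications in the set have at least h citations each."""
    h = 0
    # Rank publications from most to least cited; h is the last rank
    # at which the citation count still meets or exceeds the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 2, and 1 times give h = 3:
# three papers have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # → 3
```

Note how the result ignores the magnitude of the top counts: a set cited [100, 2, 1] and a set cited [3, 3, 1] both have h = 2.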
Impact: Impact has multiple definitions in the research environment, so it is important that you clarify how the term is used in a given context. For example, in the context of the REF, impact refers primarily to the social and economic impact of research outputs. However, in a bibliometric context, impact refers exclusively to academic impact, that is, impact within and across the research community. Indeed, we study four specific types of 'impact' that a publication can have within the research community. Two of these are time-based: impact at the point of publication and post-publication impact. The other two types have to do with links, either between research areas (subject bridging) or between people, institutions, or countries (collaboration).
Indicator: An indicator is a bibliometric or altmetric that provides an indication that something may be the case. For example, all citation metrics are indicators of potential post-publication impact. But citation metrics can only ever be indicators, not measures: we can never be sure that we have captured all of the citations to a publication, and we can never be sure why a document was cited. Indeed, the only thing we can be sure of is that a citation is a sign of being noticed, of being acknowledged formally by the research community.
JIF: The Journal Impact Factor (JIF) is a Web of Science metric updated yearly and normally released in June. JIF scores relate to journals. The scores were originally intended to help librarians (with a limited budget!) decide which journals to stock. However, over time the JIF came to be used as the default metric for judging research quality, especially in the physical and life sciences. This use was highly inappropriate for two reasons: 1) it takes journal quality as a proxy for the quality of the individual article; and 2) because the JIF is not a normalised metric, what counts as a 'good JIF' varies widely from field to field, automatically invalidating any attempt to compare research areas or multidisciplinary work. Fortunately, the use of the JIF is now diminishing due to the wider availability of modern, normalised bibliometric indicators and increased familiarity with these amongst academics, research support staff, and funders.
Metadata: Data about data, i.e. data that describes an item such as a dataset. There are two main types of metadata:
Network analysis: Network analysis, also known as network mapping or social network analysis, is a branch of graph theory that studies the structures formed by entities and the connections between them. In bibliometrics, network analysis is used to study citation networks, collaboration networks, and links between research areas.
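As a minimal illustration (with made-up document IDs), a citation network can be represented as a directed graph in which each edge points backwards in time from a citing document to a cited one, exactly as described under 'Citation' above:

```python
# Each key is a citing document; its list holds the documents it cites.
# Document IDs are invented purely for illustration.
citation_network = {
    "paper_2018": ["paper_2016", "paper_2015"],
    "paper_2016": ["paper_2015"],
    "paper_2015": [],
}

def times_cited(network):
    """In-degree of each node: how often each document is cited
    by the other documents in this set."""
    counts = {doc: 0 for doc in network}
    for cited_docs in network.values():
        for doc in cited_docs:
            if doc in counts:
                counts[doc] += 1
    return counts

# The oldest paper is cited by both later papers, so its in-degree is 2.
print(times_cited(citation_network))
```

Real network analyses go far beyond in-degree (e.g. clustering and centrality measures), but the underlying data structure is the same.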
Normalised metrics: Normalised metrics account for differences in subject areas, document types, and publication years across a set of publications. Although more complex to calculate, normalised metrics are essential if you want to compare two or more publications, authors, or institutions. For example, if one publication is older than another, then the older publication has an automatic advantage: it has had more time to attract citations. If we don't normalise to take this into account, then any comparison we make between the two publications on the basis of citations would be inherently unfair.
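A deliberately simplified sketch of one common normalisation approach: divide a publication's citation count by the average citation count of comparable publications (same field, year, and document type). Real normalised indicators are computed against large reference databases; the numbers below are invented for illustration:

```python
def normalised_citation_score(citations, reference_average):
    """Citations divided by the average citation count of comparable
    publications (same field, publication year, and document type).
    A score of 1.0 means impact exactly at the reference average."""
    return citations / reference_average

# An older paper with 12 citations in a field/year averaging 12 scores 1.0;
# a newer paper with 6 citations in a field/year averaging 3 scores 2.0.
# The newer paper has the higher normalised impact despite fewer citations.
print(normalised_citation_score(12, 12))  # → 1.0
print(normalised_citation_score(6, 3))    # → 2.0
```

This is why raw citation counts can rank the two papers one way and a normalised metric the other.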
Open Data: The idea that certain data should be freely available for everyone to reuse and republish without copyright or other licensing restrictions.
ORCID: An Open Researcher and Contributor ID (ORCID) is a unique, persistent, digital ID assigned to researchers. The aim of an ORCID is to remove the need for name disambiguation to allow the accurate association of research outputs with the researchers who produced those outputs.
Publication metadata: Metadata is a type of data that provides information about data. In the context of publications, the full text of a document is the data. The metadata are fields that give us additional information related to the document, such as the title, author name, affiliation, and unique identifier (e.g. DOI, PubMed ID, ISBN).
Publication window: A span of publication dates, normally the most recent four or five complete calendar years, used to build the publication set for a bibliometric analysis. For example, a publication window 2013-2016 includes only outputs published in 2013, 2014, 2015 and 2016.
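Building a publication set from a window is simply a filter on publication year. A small sketch, using invented records:

```python
def in_window(publications, start_year, end_year):
    """Keep only records whose publication year falls inside the
    window (both end years inclusive)."""
    return [(title, year) for title, year in publications
            if start_year <= year <= end_year]

publications = [("Paper A", 2012), ("Paper B", 2013),
                ("Paper C", 2016), ("Paper D", 2017)]

# Applying the 2013-2016 window keeps only Paper B and Paper C.
print(in_window(publications, 2013, 2016))
```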
REF: The Research Excellence Framework (REF) in the UK is a research evaluation exercise undertaken across HE. A primary function of the REF is to determine how research funding should be distributed across UK institutions on the basis of merit. The last REF ran in 2014. The next REF is expected in either 2020 or 2021.
Research data: A reinterpretable representation of information in a formalized manner suitable for communication, interpretation, or processing; more broadly, any information created by and/or used in a research project. This can include:
Research Data Management (RDM): The management of research data; the services and policies which support that management; an essential element of good research and the advancement of knowledge; a requirement placed upon researchers and their institutions by RCUK and other funders.
Scopus: Scopus is a subscription bibliographic database and citation index owned by Elsevier.
Text mining: In bibliometric contexts, text mining refers to computer-automated analysis of the titles, abstracts, and/or full texts of publications. Results of text mining can provide information on frequently occurring words in the texts; uncover patterns, such as words that appear together often; and bring into focus the specific subject areas of a set of texts.
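The 'frequently occurring words' part of this can be sketched very simply. The titles, the stop-word list, and the tokenisation below are all deliberately minimal stand-ins for what real text-mining tools do:

```python
from collections import Counter
import re

def word_frequencies(texts, top_n=5):
    """Count the most frequent words across a set of titles or
    abstracts, ignoring case, punctuation, and a tiny stop-word list."""
    stop_words = {"the", "of", "a", "an", "and", "in", "for", "on"}
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in stop_words]
    return Counter(words).most_common(top_n)

titles = ["Citation analysis of open data",
          "Open data and citation impact",
          "Network analysis of citation data"]

# 'citation' and 'data' each appear in all three titles.
print(word_frequencies(titles, top_n=3))
```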
Web of Science: Web of Science is a subscription bibliographic database and citation index owned by Clarivate Analytics. Until its sale in 2016, the database was known as Web of Knowledge and was owned by Thomson Reuters.
Page Owner: cg0016
Page Created: Tuesday 17 October 2017 15:53:47 by rb0043
Last Modified: Friday 14 September 2018 11:11:50 by cg0016