"The purpose of this consultation is to update the DCMS Creative Industries classification and we are inviting input from interested parties. We have been engaging with industry and partner organisations over potential changes via a Technical Working Group of the Creative Industries Council and are now at a point where we would like to go out to consultation and seek wider views.
We have been working with partners (NESTA, Creative Skillset and Creative and Cultural Skills), to review and update the classification used in the DCMS Creative Industries Economic Estimates (CIEE). We intend to use this review 'Classifying and Measuring the Creative Industries', referenced below, as an objective starting point to suggest which occupations and industries should be included in the updated DCMS classification.
The review uses the idea of 'creative intensity' (the proportion of people doing creative jobs within each industry) to suggest which industries should be included. If the proportion of people doing creative jobs in a particular industry is substantial, above a 30% threshold, the industry is a candidate for inclusion within the Creative Industries classification.
Similar to the outlook in our current Creative Industries Economic Estimates, the 'creative intensity' approach focuses on industries where the creative activity happens. The intention is to produce a classification which provides direct estimates of employment and the contribution to the economy, with no double counting - rather than attempting to capture all activity further down the value chain, for example, retail activities. The classification generated in this way can be used as a starting point for indirect estimates which include wider economic effects along the supply chain.
Any approach has data and methods constraints, which may affect some industries more than others. These limitations are reflected in the consultation and consultees are invited to suggest alternatives, supported by evidence-based argument. Weaknesses in the underlying classifications and data used to construct these estimates, which are identified by users, will be fed back to the organisations which set these standards and provide these data so that we can influence longer-term improvements."
(Department for Culture, Media & Sport, 19 April 2013)
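The 'creative intensity' screen described in the consultation reduces to a simple calculation: divide an industry's creative-occupation headcount by its total employment, and flag industries above the 30% threshold. A minimal sketch follows; the industry names and figures are invented for illustration and are not DCMS data.

```python
# Sketch of the 'creative intensity' screen: an industry is a candidate
# for the classification if creative jobs exceed 30% of its workforce.
# All names and numbers below are illustrative, not DCMS figures.

THRESHOLD = 0.30  # the 30% creative-intensity threshold from the review

industries = {
    # industry: (people in creative occupations, total employment)
    "Software publishing": (5200, 8000),
    "Advertising": (3100, 6500),
    "Printing": (900, 7200),
}

def creative_intensity(creative_jobs, total_jobs):
    """Proportion of an industry's workforce in creative occupations."""
    return creative_jobs / total_jobs

candidates = [
    name
    for name, (creative, total) in industries.items()
    if creative_intensity(creative, total) > THRESHOLD
]

print(candidates)
```

With these toy figures, software publishing (65%) and advertising (48%) pass the screen while printing (12.5%) does not, which mirrors the review's point that the threshold selects industries where creative activity is concentrated rather than those merely adjacent to it.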
HOLLYWOOD'S golden age may have ended in the 1950s, but it is only recently that Tinseltown appears to have hit upon a mathematical way to capitalise on our fickle attention spans.
"Film-makers have got better and better at constructing shots so that their lengths grab our attention," says James Cutting, a psychologist at Cornell University in Ithaca, New York. He analysed 150 Hollywood movies and found that the more recent they were, the more closely their shot lengths tended to follow a mathematical pattern that also describes human attention spans.
In the 1990s, a team at the University of Texas, Austin, measured the attention spans of volunteers as they performed hundreds of consecutive trials. When they turned these measurements into a series of waves using a mathematical trick called a Fourier transform, the waves increased in magnitude as their frequency decreased.
(Ewen Callaway, 18 February 2010, New Scientist)
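The pattern Callaway describes — spectral magnitude growing as frequency falls — is the classic 1/f ("pink noise") signature. A toy sketch of that relationship: build a signal whose components have amplitude proportional to 1/frequency, then recover the relationship with a discrete Fourier transform. The signal and frequencies here are invented for illustration; this is not the Texas team's data or method.

```python
import cmath
import math

# Toy illustration of a 1/f spectrum: each sine component's amplitude
# is 1/f, so slower waves are larger, and the DFT recovers exactly
# that "magnitude rises as frequency falls" pattern.

N = 256                 # samples in the window
freqs = [2, 4, 8, 16]   # component frequencies, in cycles per window

signal = [
    sum((1.0 / f) * math.sin(2 * math.pi * f * n / N) for f in freqs)
    for n in range(N)
]

def dft_magnitude(x, k):
    """Magnitude of the k-th discrete Fourier coefficient of x."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x))))

magnitudes = [dft_magnitude(signal, f) for f in freqs]
print(magnitudes)
```

For a pure sine of amplitude A at an exact frequency bin, the coefficient magnitude is A·N/2, so the four magnitudes come out near 64, 32, 16 and 8 — halving each time frequency doubles, just as the quoted measurements did.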
"As a young academic, I am reliably informed that the landscape of scholarly communication is not what it was 20 years ago. But, despite all that has changed, it seems that we still largely rely upon the same tired and narrow measures of quality and academic impact - namely, citation counts and journal impact factors.
As someone who has used the internet in almost every aspect of their academic work to date, it's hard for me to ignore the fact that these mechanisms, in predating the web, largely ignore its effects.
By holding up these measures as incentives, we appear to have our eye firmly fixed on the hammer and not the nail, adjusting our research habits in order to maximise scores and ignoring issues such as why we publish in the first place."
(Matthew Gamble, 28 July 2011, Times Higher Education)
"Norm and criterion referenced assessment are two distinctly different methods of awarding grades that express quite different values about teaching, learning and student achievement. Norm referenced assessment, or 'grading on the curve' as it is commonly known, places groups of students into predetermined bands of achievement. Students compete for limited numbers of grades within these bands, which range between fail and excellence. This form of grading speaks to traditional and rather antiquated notions of 'academic rigour' and 'maintaining standards'. It says very little about the nature or quality of teaching and learning, or the learning outcomes of students. Grading is formulaic and the procedure for calculating a final grade is largely invisible to students.
Criterion referenced assessment has been widely adopted in recent times because it seeks a fairer and more accountable assessment regime than norm referencing. Students are measured against identified standards of achievement rather than being ranked against each other. In criterion referenced assessment the quality of achievement is not dependent on how well others in the cohort have performed, but on how well the individual student has performed as measured against specific criteria and standards. Underlying this grading scheme is a concern for accountability regarding the qualities and achievements of students, transparency and negotiability in the process by which grades are awarded, an acknowledgement of subjectivity and the exercise of professional judgement in marking."
(Lee Dunn, Sharon Parry and Chris Morgan, 2002)
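The contrast Dunn, Parry and Morgan draw can be made concrete in a few lines: under norm referencing a grade is fixed by a student's rank in the cohort, while under criterion referencing it is fixed by the score against a standard. The scores, band sizes and cut-offs below are invented for illustration.

```python
# Hypothetical sketch of the two grading schemes described above.
# Scores and cut-offs are invented; the toy cohort has one student per band.

scores = {"Ana": 92, "Ben": 88, "Cal": 64, "Dee": 55, "Eli": 40}

def norm_referenced(scores):
    """'Grading on the curve': grades are fixed by rank in the cohort,
    so only a limited number of each grade can ever be awarded."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    bands = ["A", "B", "C", "D", "F"]
    return {name: bands[i] for i, name in enumerate(ranked)}

def criterion_referenced(scores,
                         cutoffs=((85, "A"), (70, "B"), (60, "C"), (50, "D"))):
    """Grades awarded against fixed standards: in principle every
    student in the cohort could earn an A."""
    def grade(score):
        for cutoff, letter in cutoffs:
            if score >= cutoff:
                return letter
        return "F"
    return {name: grade(s) for name, s in scores.items()}

print(norm_referenced(scores))
print(criterion_referenced(scores))
```

The two schemes disagree about Ben: his 88 earns only a B on the curve, because Ana outranks him and the A band is already taken, but an A against the criteria. That divergence is exactly the fairness point the quoted passage makes — under criterion referencing, achievement is not capped by how well others performed.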
"Currently, our best theories are limited in terms of their applicability to design. However, we cannot retreat into the easy empiricism of current usability perspectives where everything is measured in terms of effectiveness, efficiency and satisfaction. Theory building must occur if we are to have long term impact and the diversity of experiences users can have with technology are not simply reduced to these operational criteria. We need to stretch our conception of interaction beyond performance and simple likes/dislikes. I argue for a richer sense of user experience, one that allows for aesthetics as much as efficiency and the creation of community discourse forms over time as much as the measurement of effectiveness in a single task. There is much work ahead but unless we embrace these issues as part of our research agenda, then the study of HCI will forever be piecemeal and weak, and its results will find little positive reception among the many designers and consumers who could most benefit from them."
Dillon, A. (2001) Beyond usability: process, outcome and affect in human-computer interactions. Canadian Journal of Library and Information Science, 26(4), 57-69.
[Dillon argues for a richer sense of what constitutes web usability and resists the easy empiricism espoused by most usability engineers.]