Virtually impossible: limiting Australian children and adolescents daily screen based media use

Participants and settings

The data presented here were obtained from a cross-sectional online survey of 2,620 children and adolescents (1,373 males and 1,247 females) in Grade 3 (8 years of age: 301 males, 303 females), Grade 5 (10 years of age: 346 males, 307 females), Grade 7 (12/13 years of age: 370 males, 324 females), and Grade 9 (15/16 years of age: 356 males, 312 females) from 25 randomly selected schools. Of these schools, 14 were state government primary schools (4 in rural locations), 6 were state government high schools (3 in rural locations), 1 was a state government district high school (a rural location catering for Grades K to 10), and 4 were non-government schools (K–12).

The participating schools were located across areas of differing socio-economic status (SES), as indexed by the Socio-Economic Indexes for Areas (SEIFA), Australia, 2011 [34]. Seven primary schools were in low SES areas, four in mid SES areas, and three in high SES areas. Of the six state government high schools, three were in low SES areas and three in mid SES areas. The district high school was in a low SES area, and of the four non-government schools, three were in high SES areas and one was in a mid SES area.

In conducting the research, we adhered to the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) statement for cross-sectional and observational studies.

Instrumentation

The Screen Based Media Use Scale (SBMUS) was specifically developed to measure daily SBMU. Initially, the instruments and/or items used in previous research were reviewed to identify items for possible inclusion in the SBMUS. It was evident that most studies relied on either one or two items to collect data on “total time spent on screens”, and that these data related to TV watching, computer use, video game playing, or a combination of these. We sought information from young people about the different types of screens being used, by whom (i.e., according to sex and age), and for what purpose. Therefore, our instrument comprised the following sections and format: (i) Demographics: a brief section seeking information about sex, date of birth, age, and school grade level. (ii) Screen types: the following text was prominently displayed: “Screens can mean anything that shows a picture that you watch or interact with. Below are some pictures of screens you may use. These include an iPod Touch, iPad, Mobile Phone or iPhone, TV, Laptop Computer, Portable PlayStation or an Xbox.” (Images of these screens were then presented.) “Examples of things you can do on screens are watch TV, search the internet, use social networking sites, use instant messenger, send and receive emails, play games, shop online, download music, do school work and homework, and watch music videos.” The images of the eight screen-based media were then presented again, and participants were asked to place a check in the box beside any they had used in the last seven days.

An interactive slide bar measuring SBMU in hours and minutes was then presented, and participants were asked to “think about ONE typical day last week (Monday to Friday). How many HOURS in total did you spend on ALL screens that DAY? Start from the time you woke up and think about the total number of hours, including before school, during school, after school, at home or at a friend’s house, and in the evening until the time you went to bed.” (Emphases are as displayed on the online SBMUS.) Employing a sub-sample of 174 young people, we assessed the test-retest reliability of this measure across a six-month period. Overall reliability was good (r = .50, N = 174) and did not differ by gender (boys: r = .51, n = 91; girls: r = .53, n = 82). Reliability varied somewhat across Grades 3 (r = .49, n = 33), 5 (r = .60, n = 44), and 7 (r = .52, n = 51). Test-retest reliability was most problematic amongst the oldest group, those in Grade 9 (r = .19, n = 46). However, the young people in Grade 9 were taking examinations at the second time point, and we believe this disrupted the stability of the measure.

(iii) Screen activities: a list of 20 screen-based activities (e.g., used Google, Twitter, played MMORPGs, shopped online, used the web for research, watched/listened to videos/music), along with illustrative images, was then presented, and participants were asked to place a check in the box beside any they had participated in over the past seven days.

Four separate sections (each with definitions and an illustrative image of what the section referred to) on gaming, social networking and instant messenger, TV/videos/music, and searching the web were then presented. Each of the four sections required participants to use an interactive slide bar to estimate their SBMU in hours and minutes, as for the overall measure above. The same sub-sample of young people provided test-retest data for each of these four measures. Overall, test-retest reliabilities were good, ranging from .46 (Web use) to .53 (Gaming) for the sub-sample as a whole (N = 174). Test-retest reliability was higher for girls on Gaming (girls: r = .65, n = 68; boys: r = .49, n = 78), Social networking (girls: r = .74, n = 77; boys: r = .22, n = 84), and Web use (girls: r = .52, n = 77; boys: r = .35, n = 87), but higher for boys on TV/Videos/Music (boys: r = .58, n = 89; girls: r = .53, n = 77). Across Grades 5 and 7, test-retest figures for all four measures ranged from .51 to .69. At Grade 3, reliabilities were .41 or .42, except for Social networking, which was very poor (r = .08, n = 26). At Grade 9, reliabilities were also good for three of the measures (Gaming: r = .50, n = 37; Social networking: r = .80, n = 45; TV: r = .53, n = 45), but the fourth, Web use, was weaker (r = .22, n = 45). This may reflect the same issue noted previously: the oldest children were engaged in exams during the later data collection, and Web use is likely one of the primary uses of screens at school. This approach yielded an overall time estimate plus a time estimate for each of the four activity types. It also meant that we could consider the prevalence of adolescents exceeding the 2-hour recommendation, and to what degree.
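The prevalence measure described here reduces to counting how many of the four per-activity estimates exceed the 2-hour recommendation. A minimal sketch (function and argument names are hypothetical, not the authors' code):

```python
def activities_over_limit(gaming, social, web, tv, limit_hours=2.0):
    """Number of the four screen activities (0-4) reported above the limit."""
    # Each argument is the reported weekday hours for that activity type.
    return sum(hours > limit_hours for hours in (gaming, social, web, tv))
```

For example, a participant reporting 3 hours of gaming and 2.5 hours of TV but under 2 hours on the other two activities would score 2 on this measure.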

To test the instrument, 20 young people aged 8–16 years were provided with unique log-in codes and asked to comment on the face and content appropriateness of the instrument, its ease of use, interactivity, and engagement. With the exception of a slight modification to the interactive slide bars (modified to display the amount of SBMU time numerically as the bars slide), all feedback was positive. On average, participants completed the SBMUS in 25 minutes.

Procedure

Permission to conduct the research was initially obtained from the Human Research
Ethics Committees of the University of Western Australia and the State Department
of Education. Following this, 30 schools were randomly selected from a mix of socioeconomic and metropolitan/rural areas, and their principals were contacted to ascertain their interest in participating. The 25 who expressed interest subsequently received
information sheets explaining the research, along with a follow up phone call to answer
any questions and to finalise their involvement. As the current research is part of a three-year longitudinal study using an accelerated cohort sequential design,
information sheets and consent forms (for active informed consent from parents) were
sent to the parents of children in primary school Grades 3, 5, and 7 and adolescents
in high school Grade 9. The sample of 2,620 students represented an affirmative return
rate of approximately 78%.

The SBMUS was administered to the participants in groups of 15–25 students during
a four week period when the electronic link remained open. All participants were provided
with a unique code that allowed them to access the online version of the SBMUS to
complete in confidence. Administration in each school was supervised by a member of
staff who received a written set of instructions to ensure standardization of administration
and to address any technical difficulties should they arise.

Statistical analyses

Our analyses comprised four sections. The first presented descriptive data. In the second, our aim was to examine the possible association of Sex (male, female) and Grade (3, 5, 7, 9) with participants’ overall assessment of their weekday SBMU (≤ 2 hours, > 2 hours). We achieved this by conducting a hierarchical linear model, using backwards elimination of effects. Where there were significant effects, these were interrogated by calculating relative odds ratios (Field, 2009).
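As a sketch of the odds-ratio step, a relative odds ratio can be computed from a 2×2 cross-tabulation of, say, Sex against exceeding 2 hours. The counts in the example are illustrative only, not study data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]: (a/b) / (c/d) = ad / bc."""
    return (a * d) / (b * c)

# Illustrative table: 60 of 100 boys vs 40 of 100 girls exceed 2 hours,
# i.e. [[60, 40], [40, 60]] -> OR = 2.25.
```

A confidence interval would normally accompany the point estimate; statsmodels' Table2x2, for instance, provides both.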

In the third section, our aim was to investigate the data in more detail. The same
hierarchical linear model was used but we replaced ‘overall assessment of SBMU’ with
the estimated screen time for each of four different forms of SBMU: Gaming, Social
Networking, Web use, and TV/DVD/Movies. Thus, for example, the first hierarchical linear model included Sex, Grade (3, 5, 7, 9), and participants’ assessment of their weekday SBMU for Gaming (≤ 2 hours, > 2 hours).

In the fourth and final section, our goal was to examine the extent to which young
people reported exceeding SBMU recommendations on different forms of screen use (Gaming,
Social Networking, Web use, and TV/DVD/Movies). To this end, we created a variable reflecting the number of screen activities that participants reported engaging in on weekdays for more than 2 hours. This measure could range from 0 (no screen activities exceeding 2 hours) to 4 (all four screen activities exceeding 2 hours). We then examined whether this variable differed as a consequence of Sex and Grade (3, 5, 7, 9) by conducting a two-way independent ANOVA. A significant interaction was interrogated by conducting four independent t-tests comparing boys’ and girls’ scores at each grade level. These t-tests were Bonferroni corrected (.05/4 = .013).
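The follow-up comparisons described above can be sketched as pooled-variance independent-samples t-tests evaluated against a Bonferroni-adjusted alpha. This is a generic implementation under those assumptions, not the authors' analysis code:

```python
import numpy as np

BONFERRONI_ALPHA = 0.05 / 4  # four grade-level boy/girl comparisons

def independent_t(x, y):
    """Student's independent-samples t statistic (pooled variance)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    # Pooled variance across the two groups (unbiased, ddof=1).
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return float((x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny)))
```

In practice scipy.stats.ttest_ind would supply the statistic and p-value directly; each p-value would then be compared against BONFERRONI_ALPHA rather than .05.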

Ethical standards

The authors assert that all procedures contributing to this work comply with the ethical
standards of the relevant national and institutional committees on human experimentation,
the American Psychological Association and with the Helsinki Declaration of 1975,
as revised in 2008.