Incongruent

Awards P2: Exploring Student Satisfaction in Award Programs: A Deep Dive into Measurements, Models, and Impact

October 04, 2023 · Stephen King

Ever wondered what makes a student tick? What are the factors that lead to a satisfied and motivated learner? Get ready to have these questions answered in our latest episode of the Incongruent podcast with host Stephen King. We delve into the intricate world of student satisfaction within award programs, dissecting the complex nature of measuring satisfaction and its impact on a student's learning experience. We discuss award events, student satisfaction surveys, and quality models such as SERVQUAL, SERVPERF, and HEdPERF, and reveal our proposed model, AWARDPERF.

Stay tuned as we explore the intricate dynamics of student satisfaction based on research findings from India and China, discussing aspects like campus facilities, administrative support, career opportunities, and school reputation. We also tackle the various models and talk about their potential to empower educators and competition organizers. This is not just another podcast episode; this is a journey through the fascinating world of student satisfaction, a deep dive into what makes learning an award-winning experience. So, buckle up and get ready to learn, unlearn, and relearn with us!

Welcome back, dear listeners, to another thrilling episode of The Incongruent podcast, brought to you by Stephen King. Today we're diving deep into the world of student satisfaction within award programs.

 Chapter 1 - Setting the Stage: Evaluating Awards from an Educational Perspective

 In the initial chapters of this podcast series, we ventured into the realm of award evaluations from a pedagogical perspective. We asked ourselves, "How do award events contribute to a student's learning outcomes?" Today, we're taking it one step further. 

The StratComm event, organized jointly by the University of Johannesburg and Sweden's Lund University, has opened a new chapter for us. We're now on a quest to understand how to normalize awards within teaching programs. Two vital aspects stand before us: student satisfaction and quality assurance.

Our journey begins with a discussion on student satisfaction surveys, inspired by the research of Pardis Rahmatpour and her colleagues in their 2019 Journal of Education and Health Promotion article. Stay tuned, because later we'll delve into three major quality models: Servqual, Servperf, and Hedperf. Ultimately, we'll explore what I'm tentatively calling Awardperf, or perhaps Hedperf-Awardperf.

 Unveiling Student Satisfaction: What Does It Mean?

 Rahmatpour and her team in 2019 provided a plethora of definitions of student satisfaction. One reads, "the favorability of a student's subjective evaluation of the various outcomes and experiences associated with education." Another defines it as a "short-term attitude resulting from the evaluation of student experiences with the education service received."

Now, why does student satisfaction matter, you ask? Well, it's more than just a feeling. It affects student motivation, the recruitment of new students, and the retention of existing ones. High satisfaction is even linked to a student's self-efficacy and their pursuit of individual learning. But as we'll discover, defining and measuring this satisfaction can be a complex endeavor, varying across communities and cultures. 

Initially, I hoped to find a definitive scale in the research, one that could be applied to the awards my students participate in. However, my hopes were dashed when I found that, according to Rahmatpour's team, "no robust and valid single scale for the measurement of student satisfaction" had been identified.

But fear not, dear listeners, for they did highlight several papers and studies that come close. Some of them originated from as far away as Iran, India, and Pakistan, which piqued my interest, given my current posting at Middlesex University Dubai. The paper by Chadha (2017) stands out as the most recent and most highly rated; we'll be exploring it shortly. Additionally, we'll compare it with a Chinese university case, recommended by Rahmatpour's team as potentially the best of the bunch.

Rahmatpour's report also highlighted that the number of items explored in these surveys ranged from a manageable 22 to a comprehensive 92, across 3-11 dimensions. They were neatly categorized into four boxes: curriculum, facilities, campus, and relationship. The questions, on the other hand, often touched on campus facilities, resources, administrative and learning facilities, campus climate, and much more.

From all this, it seems clear that I should aim for a questionnaire at the lower end of that continuum: a maximum of 22 items across no more than three dimensions. But, dear listeners, this is just the beginning. More research lies ahead as we strive to align these findings with our initial rubric.
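To make that concrete, here is a minimal sketch in Python of what an instrument at that low end might look like. The dimension labels borrow the curriculum/facilities/relationship categories mentioned above, but every item wording is a hypothetical placeholder of mine, not a question from any of the cited scales:

```python
# A sketch of a compact satisfaction instrument: at most three
# dimensions and 22 items in total, matching the low end of the
# ranges Rahmatpour's team reported. All item wordings here are
# hypothetical placeholders, not items from any published scale.
QUESTIONNAIRE: dict[str, list[str]] = {
    "curriculum": [
        "The award brief was clearly explained.",
        "The challenge matched what I was taught in class.",
    ],
    "facilities": [
        "The venue and equipment supported my work.",
        "Support staff were available when I needed them.",
    ],
    "relationship": [
        "Mentors and judges were knowledgeable and approachable.",
        "Feedback on my entry was constructive.",
    ],
}

MAX_DIMENSIONS = 3
MAX_ITEMS = 22

def validate(questionnaire: dict[str, list[str]]) -> None:
    """Raise if the instrument exceeds the dimension/item budget."""
    total_items = sum(len(items) for items in questionnaire.values())
    if len(questionnaire) > MAX_DIMENSIONS:
        raise ValueError("too many dimensions")
    if total_items > MAX_ITEMS:
        raise ValueError("too many items")

validate(QUESTIONNAIRE)
```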

 Chapter 2 - Evaluating Student Satisfaction Scales: India and China

 In the second installment of this series, we're preparing for a talk at the University of Johannesburg and Lund University's Strategic Communications Conference. This time, we're evaluating two academic papers presenting student satisfaction scales, hailing from the academic realms of India and China.

 The Indian Campus - Insights from Chadha (2017)

 Chadha and colleagues (2017) evaluated a satisfaction survey based on the perceptions of international students at an Indian higher education institute. The survey emerged after a thorough review of existing scales, including Servqual, Servperf, and Hedperf. They crafted their own model, simplifying it before distribution, resulting in nine thematic areas and around 40 scaled questions.

Beyond the questions themselves, these thematic areas provide a fascinating window into the minds of international students in India. What's important to them? Faculty, administrative support, and campus facilities dominate their satisfaction scores. However, cost, lateness/interruptions, and student conduct rank lower.

Chadha's study also identifies areas of dissatisfaction, revealing issues with campus cleanliness, logistical/course question handling, and administrative services. On the flip side, students are satisfied with affordable prices, on-time operations, and the expected curriculum. Access to caring and knowledgeable faculty also ranks high.

From an awards perspective, clear information and on-demand support seem vital. Access to knowledgeable experts and adequate resources is essential for the challenge. Cost and conduct also play significant roles, especially in competitions. Facilities, utilities, and safety matter when competitions take place off-site.

For those of you visualizing this, we have a weighted measure for Evident Product awards and another for Evident Process awards, like Hackathons, which often occur outside students' comfort zones. These insights help us consider factors when selecting venues and resources for these events.

 The Chinese Campus - Insights from Chiu et al. (2017)

 In a similar study, Chiu et al. (2017) present a student satisfaction survey from a Chinese sports institute. They explore student opinions across seven areas, including teaching and learning, student management, logistics services, and more. Sixty-seven questions, each measured on a 7-point Likert scale, make up this study.

What's intriguing here is the inclusion of dimensions like school reputation and career development, which could be highly relevant in award programs. Reputation is crucial for an awarding body, and career opportunities are paramount for students.

When we add these dimensions to Chadha's findings, we're looking at a total of five dimensions for Evident Product awards and six for Evident Process awards. This gives us a valuable framework to prioritize areas of focus based on the specific competition involved.
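As a rough illustration, here is how those weighted measures might be wired up in Python. The dimension labels follow our discussion of Chadha and Chiu, but the weights are entirely hypothetical placeholders for a programme team to set, not values from either study:

```python
# Sketch of weighted satisfaction measures per award type.
# Dimension scores are assumed to be mean ratings on a 7-point
# Likert scale (as in Chiu et al., 2017); every weight below is a
# hypothetical placeholder, not a value from either study.
WEIGHTS: dict[str, dict[str, float]] = {
    # Five dimensions for Evident Product awards.
    "evident_product": {
        "information_and_support": 0.30,
        "expertise_access": 0.25,
        "resources": 0.20,
        "reputation": 0.15,
        "career_development": 0.10,
    },
    # Six dimensions for Evident Process awards (e.g. hackathons),
    # adding facilities/safety because these often run off-site.
    "evident_process": {
        "information_and_support": 0.25,
        "expertise_access": 0.20,
        "resources": 0.15,
        "facilities_and_safety": 0.15,
        "reputation": 0.15,
        "career_development": 0.10,
    },
}

def weighted_satisfaction(scores: dict[str, float], award_type: str) -> float:
    """Combine per-dimension mean Likert scores (1-7) into one index."""
    weights = WEIGHTS[award_type]
    return sum(weight * scores[dim] for dim, weight in weights.items())

# Example: per-dimension means for a hypothetical hackathon cohort.
hackathon = {
    "information_and_support": 5.8,
    "expertise_access": 6.1,
    "resources": 4.9,
    "facilities_and_safety": 5.2,
    "reputation": 6.4,
    "career_development": 5.5,
}
print(weighted_satisfaction(hackathon, "evident_process"))  # ~5.7
```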

 Chapter 3 - Foundations of Service Satisfaction Scales

 In today's episode, we take a step back to explore the foundations of service satisfaction scales, following the models recommended by Chadha and her team (2017). Before diving into content analysis, it's crucial to understand the historical context. These documents are dense, so let's start with Servqual.

Servqual, introduced by Parasuraman, Zeithaml, and Berry in 1988, marks the inaugural attempt to create a multi-item scale for evaluating perceived service quality. Their methodical development process led to a five-dimension model with 22 scaled items.

One intriguing insight from this research is the distinction between mechanistic and humanist quality. The former focuses on tangible benefits, while the latter considers perceptions and cognition. This distinction can have implications for how we assess educational institutions and awarding bodies.

The core aim of Servqual is to evaluate quality by comparing the customer's actual experience against what they expected. Meeting or exceeding expectations can have significant impacts on satisfaction and perceived quality.
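In its commonly cited form, the gap score subtracts expectation from perception item by item and averages within each dimension. The P/E/Q notation below is the standard shorthand from the service-quality literature, not something quoted in this episode:

```latex
% Servqual gap score: perception minus expectation per item,
% averaged over the n_j items that make up dimension j.
Q_{ij} = P_{ij} - E_{ij}
\qquad\qquad
Q_j = \frac{1}{n_j} \sum_{i=1}^{n_j} \bigl( P_{ij} - E_{ij} \bigr)
```

Here P is the perceived performance on item i of dimension j and E the corresponding expectation; meeting or exceeding expectations shows up as a gap score of zero or above.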

The five dimensions identified in Servqual (Tangibles, Reliability, Responsiveness, Assurance, and Empathy) align with our previous discussions, offering valuable insights into our pursuit of understanding student satisfaction in awards.



 Chapter 4 - The SERVPERF Model: A Deeper Dive

 In this chapter, we're delving into the service quality model known as SERVPERF, the brainchild of Cronin and Taylor, crafted in 1992.

Our mission in this podcast series is to develop satisfaction surveys and evaluation tools, empowering educators and competition organizers to communicate more effectively about their objectives, obligations, and shared goals. Why is this important? Well, dear listeners, I've noticed a growing trend of events geared towards universities, often timed around corporate or government schedules, without considering the teaching timetable. Yes, we want to participate, but we also need to consider the tangible aspects of these events with a touch of empathy.

 Unveiling Servperf: A Performance-Based Alternative

 The study by Cronin and Taylor in 1992 addressed some observed weaknesses in the earlier Servqual scale, which we discussed in a previous episode. In particular, they argued that Servqual rested on little empirical evidence, so its gap-based definition of quality amounted to an inference of causality rather than a demonstration of it.

Cronin and Taylor set out to improve the prior study on two fronts. Firstly, they aimed to test a "performance-based alternative," and secondly, they sought to "examine the relationships between service quality, consumer satisfaction, and purchase intentions." In essence, they aimed to provide strong evidence that managers can measure satisfaction and that there is value in doing so.
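Stated compactly (again in the literature's standard shorthand, not notation quoted from Cronin and Taylor), the performance-based alternative simply drops the expectation term:

```latex
% Servperf: quality as measured performance alone (left),
% versus Servqual's expectation-adjusted gap score (right).
SQ_{\text{Servperf}} = \frac{1}{n} \sum_{i=1}^{n} P_i
\qquad \text{vs.} \qquad
SQ_{\text{Servqual}} = \frac{1}{n} \sum_{i=1}^{n} \bigl( P_i - E_i \bigr)
```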

 Satisfaction vs. Perception of Service Quality

 One key definition that emerges from Cronin and Taylor's work is the differentiation between satisfaction and the perception of service quality. They highlight that satisfaction relates to the emotions experienced when completing a specific, discrete transaction, while the perception of service quality is a state that develops over time.

It's easy to treat these two elements as synonymous, even though they only partially overlap, and that confusion can hinder decision-makers from identifying and rectifying problems efficiently. Satisfaction is typically measured at the point of delivery, and a simple equation can be sketched contrasting the customer's initial expectations or quality perceptions with their actual experience.
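One toy way to formalize that distinction (my own illustration, not an equation from Cronin and Taylor) is to treat satisfaction as disconfirmation at a single transaction, and perceived service quality as a slowly updating attitude that each new transaction nudges:

```latex
% Illustration only: S_t is satisfaction with transaction t
% (performance minus expectation); Q_t is the running quality
% attitude, with 0 < \alpha < 1 controlling how sticky it is.
S_t = P_t - E_t
\qquad\qquad
Q_t = \alpha \, Q_{t-1} + (1 - \alpha) \, S_t
```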

However, several factors beyond prior experience with the service provider can influence this schema. External factors, such as reading a negative or positive press article, can imprint upon the consumer's perceptual map. The impact of public relations on service quality becomes crucial here, especially in the case of negative publicity, as we've seen in the Volkswagen case discussed in Li's work (2022).

Additionally, Cronin and Taylor argue for satisfaction to be explored as an attitude that can be influenced by environmental factors as much as deep thought and reflection. They introduce the concept of "adequacy-importance," which helps us understand scenarios where a customer makes a choice based on immediate environmental needs, even if their prior experience suggests otherwise.

For example, when you're hungry and need something quick, you might choose a fast food chain, despite knowing it might not be the healthiest or the best value for money. This choice meets an immediate need, and you're satisfied with it because it fulfills your current priorities.
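The adequacy-importance tradition is usually written as a weighted attribute sum; in a sketch (with symbols of my own choosing), the attitude A toward an option is:

```latex
% Adequacy-importance sketch: W_i is how much attribute i matters
% in the moment (speed, hunger, price); B_i is the option's rating
% on that attribute. When the weights favour speed, the fast-food
% option wins despite middling ratings elsewhere.
A = \sum_{i=1}^{n} W_i \, B_i
```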

 Comparing Servqual and Servperf: A Twisting Narrative

Cronin and Taylor conducted a series of statistical examinations comparing and contrasting Servqual and Servperf. It's a bit of a twisty and confusing narrative, but the gist of it is that Servqual appears appropriate in some industries and cases, while Servperf seems to have wider validity.

They wrap up their study with a mind-bending set of propositions that has profound implications for our understanding; I'll restate them as a simple path model after the list:

Service quality has a significant effect on consumer satisfaction.

Consumer satisfaction has a significant effect on purchase intentions.

Service quality does not have a significant impact on purchase intentions.
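Restated as the promised path model (my paraphrase of the three propositions, with beta coefficients standing in for the estimated paths):

```latex
% Quality -> satisfaction -> purchase intentions, with no
% significant direct path from quality to intentions.
SAT = \beta_1 \, SQ + \varepsilon_1
\qquad
PI = \beta_2 \, SAT + \beta_3 \, SQ + \varepsilon_2
\qquad
\beta_1, \beta_2 \ \text{significant}, \quad \beta_3 \approx 0
```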

 Understanding the Advertising vs. Public Relations Dynamic

 In my world, I understand this in the context of the advertising versus public relations dynamic. Advertising pushes messages about service quality, whereas public relations works through third parties to convey stories of consumer satisfaction. This dynamic can explain the contribution of PR to the marketing mix and why, with significantly smaller budgets, corporate communication managers stand shoulder to shoulder with top-level executives.

In the realm of higher education, satisfaction could be closely linked to a student's decision to continue or complete their studies. The "adequacy-importance" model helps managers identify external risk factors and develop strategies to tilt the scales in favor of students continuing their education. It's about understanding what matters most to students in the moment.

Moreover, the way satisfaction is conveyed through various channels, especially to younger generations, is critically important for the future recruitment of students. If past students had a positive experience, word-of-mouth and shared stories become powerful tools in attracting future students.

In the context of awards, this challenge becomes even more critical. Even if an awarding organization has global connections and influential partners, if the experiences of one group of students are not positive, future enrollments could be in jeopardy. Monitoring student satisfaction in award competitions is therefore pivotal for their ongoing success.

I can't help but reflect on my own experiences organizing regular extra-curricular activities during Dubai Lynx Week, inviting past students to share their experiences and arranging guest lectures and mock competitions. Sharing alumni satisfaction around that event motivated continued and growing participation across a wider range of events and competitions.

And with that, we wrap up the second programme on industry awards and the intersection with higher education. We've journeyed deeper into the intricacies of student satisfaction within award programs, and our exploration continues. Join us next time as we explore yet another facet of this fascinating topic on The Incongruent podcast. 
