    The Communication Competency: Exploring Student Intern and Employer Communication Differences

    February 01, 2021 | By Aaron James and Troy Nunamaker



    NACE Journal, February 2021

    The Importance of Written Communication Skills

    Several national organizations, including the National Association of Colleges and Employers (NACE), the Council for Industry and Higher Education, and Adecco, have published reports discussing the demand for increased career readiness among recent college graduates, along with a possible shortage of recent college graduates who are prepared to enter the workforce.1, 2, 3 Career competencies, also known as soft skills, are a vital component of a successful transition into the workforce; however, there remains a gap in how students and employers rate proficiency levels in these competencies.4 Oral/written communication skills, along with the leadership and professionalism/work ethic competencies, are the three main areas where students and employers most radically differ when evaluating proficiency.5 Communication skills are highly valued under normal circumstances, but they take on new importance in a time of Zoom meetings, distance learning, and social distancing.

    As outlined in a previous NACE article analyzing the leadership competency, a likely first step in responding to the competency gap is defining career readiness and the competencies associated with career readiness; in this case, the specific competency is communication.6 NACE’s definition of the communication competency states that an individual should be able to “articulate thoughts and ideas clearly and effectively in written and oral forms to persons inside and outside of the organization. The individual has public speaking skills; is able to express ideas to others, and can write/edit memos, letters, and complex technical reports clearly and effectively.”7 According to this definition, communication is both expressive and technical; expressive in that ideas and thoughts are articulated in such a way as to be comprehensible by others, and technical in that the presentation of those ideas and thoughts is structured in a particular manner, depending on the medium. While this definition covers both oral and written communication, our study focuses on the latter.

    It is important to determine if students have an accurate view of their ability to communicate via writing, especially in a world of social distancing, remote work, and virtual education. While in-person or face-to-face communication provides a speaker with the opportunity to modulate their message to the audience, written communication must maintain the appropriate tone while still conveying information; a written message cannot be adjusted mid-delivery in response to audience feedback. Written communication also requires correct grammar, punctuation, and structure. Errors of this kind can lead to an intelligent, talented, and eloquent speaker coming across as boorish, inane, or unintelligible in writing.

    This study has several interesting implications for the other competencies implicated in the competency gap. The gap between the self-assessment of recent graduates and employers' evaluations may not just be the result of overconfidence, but rather a misunderstanding of how the skills described by a competency are exercised in the workforce. Students may rely solely on their own experiences when evaluating their proficiency in a given competency. Importantly, they will rely on readily available metrics to evaluate their proficiency; if academic essays are the only or most common form of written communication that a student has received explicit feedback on, their ability to write a good academic essay will frame their self-assessment. Employers do not evaluate competencies in the abstract as standalone skills; they desire applicants who can express their skills in a specific disciplinary context.8

    Written communications leave behind a tangible record that can be assessed later. In the interest of impartially evaluating written communication skills, we used the Grammarly app to evaluate each sample individually. Grammarly has already seen success in improving the writing of graduate students.9 It is accessible for the general population, with both the paid and free versions being straightforward to use, simplifying both our initial testing and the replication of our results. In addition, Grammarly categorizes issues and errors based on a number of dimensions, making quantification and subsequent comparisons straightforward. These dimensions include correctness, clarity, engagement, and delivery.10, 11, 12

    Students intending to enter the workforce after graduation will need some degree of communication skills in order to be successful. As the internet binds the world ever closer, written communication skills, in particular, become more important, a reality further driven home by our current pandemic. The importance of written communication skills predates COVID-19 and will outlast it. While some meetings, whether online or in person, will be conducted face to face, the memos, emails, manuals, procedures, reports, forecasts, plans, forms, disclosures, invoices, case files, and quarterly reviews will continue to serve as grist for the business world's mill. Each of these examples demands not only a certain level of writing proficiency but also the writing skills that are appropriate for the field, type of document, and intended audience. However, before students can apply themselves to learning new writing skills, they must understand the skills that they already possess. It is therefore important to determine if students are accurately self-assessing their communication skills. While major- or career-specific skill evaluations would likely be most helpful, the metrics scored by Grammarly represent a basic technical mastery that is useful across disciplines. If students are not accurately self-assessing, then determining if there are any specific areas where students overrate their abilities would also be valuable.


    For tangible records of written communication, we leveraged Clemson University’s internship final evaluation in this study. Clemson University’s Center for Career and Professional Development has used the same final evaluation for its zero-credit-hour internship course since the fall 2017 semester. This final evaluation asks competency-focused questions about the student interns’ experiences and career development.

    Student interns and mentors are asked to rate the student interns’ proficiency level in each of NACE’s competencies, where the proficiency levels are awareness, basic, intermediate, advanced, and expert. The proficiency levels reflect a student’s knowledge, experience, and application of a given competency. The Clemson Center for Career and Professional Development defines each proficiency level as follows: awareness reflects theoretical knowledge, basic reflects limited experience, intermediate reflects practical experience, advanced reflects extensive experience and application, and expert indicates mastery and attainment in all areas. While there are minor adjustments to language, both student interns and mentors are asked in an open-ended question to explain why they chose each proficiency rating for each competency; this is the question that we used in our research. Students are first asked to rate their competency level, and then they are asked to describe an experience from their internship that supports this rating. Mentors are asked to list the student's strongest and weakest competencies, as well as why they selected those competencies. The open-ended responses to the communication competency and the matching self-assessed proficiency rating were sanitized of identifying information and exported to an Excel spreadsheet. From this sheet, the responses were then categorized based on length, using Excel’s built-in sorting tool. In keeping with narrative analysis best practices, we focused on lengthier responses: A minimum response length of 30 words is suggested; each sample we analyzed is at least 60 words.13
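The length-based screening described above can be sketched in a few lines of Python. The field names ("rating", "response") and the sample texts below are hypothetical, not drawn from the study's data; only the 60-word cutoff comes from the text.

```python
# Hypothetical sketch of the sample-selection step: keep de-identified
# responses that meet the minimum length, sorted longest first.

def word_count(text: str) -> int:
    return len(text.split())

def select_samples(responses, min_words=60):
    """Drop responses under the length cutoff and sort the rest by length."""
    eligible = [r for r in responses if word_count(r["response"]) >= min_words]
    return sorted(eligible, key=lambda r: word_count(r["response"]), reverse=True)

responses = [
    {"rating": "expert", "response": "word " * 75},
    {"rating": "awareness", "response": "word " * 20},    # below the cutoff
    {"rating": "intermediate", "response": "word " * 120},
]
kept = select_samples(responses)
print([r["rating"] for r in kept])  # ['intermediate', 'expert']
```

In the study itself this sorting was done with Excel's built-in tools; the sketch simply makes the selection rule explicit.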


    To better evaluate gaps in self-assessment, 15 samples were chosen from the awareness, intermediate, and expert competency levels, for a total of 45 samples, allowing qualitative comparisons between student interns who graded themselves at the lowest, middle, and highest competency ratings. We did not analyze the basic or advanced groups; we hypothesized that the awareness and expert groups would show the widest discrepancies in the categories measured, and included the intermediate level as a control group lying between the two extremes. However, as we will detail below, that hypothesis does not appear to hold. Future studies should include the basic and advanced groups.

    Fifteen samples were also chosen from mentor responses. Analyzing mentor responses allowed for a number of interesting comparisons. Comparisons can be made not only between mentors and the average score of the awareness, intermediate, and expert student groups, but also between mentors and specific student groups. Each sample was then individually copied and pasted into Grammarly’s editing software. As previously mentioned, Grammarly's assessment provided alerts for clarity, engagement, delivery, and correctness, as well as the total number of alerts and an overall score. “Clarity” reflects how easy a sample is to follow or understand, “engagement” determines if a sample is tedious to read, “delivery” assesses the tone of the sample, and “correctness” indicates errors in spelling, punctuation, or grammar.14, 15, 16

    The results for each individual were then transcribed into that sample’s corresponding Excel entry. After the results for all the samples for the given communication competency level were transcribed, the average for each category was found using Excel’s formula function, simplifying direct comparisons between different competency levels and mentor results. Scores for each category were rounded to the nearest tenth. The word count of each sample was also entered into Excel, providing another metric of comparison. To improve the quality of these comparisons, the average word count and the average score for each category were used to compute an average number of alerts per word. This metric differentiates longer responses from shorter ones, even when they contain an equal number of alerts.
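The aggregation step described above can be sketched as follows. All sample values here are invented for illustration; the per-word figure is simply a category's average divided by the average word count, which is how the study normalizes for response length.

```python
# Illustrative sketch of the per-category averaging step; numbers invented.
from statistics import mean

samples = [
    {"words": 180, "correctness": 4, "clarity": 2, "engagement": 1, "delivery": 0},
    {"words": 230, "correctness": 5, "clarity": 3, "engagement": 2, "delivery": 1},
    {"words": 205, "correctness": 3, "clarity": 4, "engagement": 3, "delivery": 0},
]

avg_words = mean(s["words"] for s in samples)  # 205 for these samples

averages = {}
per_word = {}
for category in ("correctness", "clarity", "engagement", "delivery"):
    avg = mean(s[category] for s in samples)
    averages[category] = round(avg, 1)    # rounded to the nearest tenth
    per_word[category] = avg / avg_words  # normalizes for response length

print(averages)  # {'correctness': 4.0, 'clarity': 3.0, 'engagement': 2.0, 'delivery': 0.3}
```

The same arithmetic was done in Excel in the study; the sketch only makes the formulas explicit.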


    The student group with the self-assessed lowest communication competency level—awareness—averaged an overall score of 81.4 with 8.3 alerts. (See Figure 1.) The intermediate group averaged 85.2 with 9.6 alerts, and the group that expressed the highest confidence in their communications skills—expert—averaged 82.8 overall with 13 alerts.

    Figure 1

    The average mentor score was 92.8, with 3.1 alerts. The intermediate group scored higher than the expert group in every category measured, with the exception of average word count; the intermediate group averaged 205 words to the expert group’s 254. The awareness group averaged a word count of 142, while the mentor group averaged 85.

    As seen in Figure 2, errors in correctness were the most common error for each student group, with the awareness group averaging 4.4, the intermediate group averaging 4, and the expert group averaging 5.3. Even controlling for word count, the intermediate group outscored the expert group, having fewer average correctness errors.

    Figure 2

    The next most common type of error among students was clarity. The awareness group had the fewest average clarity errors, with 1.9. The intermediate group averaged 3, and the expert group averaged 3.9. Errors of engagement averaged 1.6 for the awareness group, 2.3 for the intermediate group, and 3.1 for the expert group. As with clarity, the awareness group outscored both the intermediate and expert groups, with the intermediate group outscoring the expert group.

    Errors of delivery were the least common among student groups: The awareness group averaged 0.4, the intermediate group averaged 0.3, and the expert group averaged 0.9. The intermediate group had the fewest average delivery errors, with the awareness group close behind; both groups handily outscored the expert group. Even after adjusting for word count, these scoring trends hold true.

    As was true of the student groups, the most common error among the mentor responses was errors of correctness, with the mentors averaging 1.7. The mentor group also averaged 0.7 clarity errors, 0.7 engagement errors, and 0.1 delivery errors. The mentor group had the highest average score, the fewest average errors, and the lowest average word count among the groups tested. As with the student groups, these trends hold true even after adjusting for word count. For example, the average score of all student groups was 83.1, nearly 10 points lower than the mentor score. Students averaged more than triple the errors of mentors, with 10.3 compared to 3.1. Student errors of correctness averaged 4.6, with 2.9 clarity, 2.3 engagement, and 0.5 delivery errors, outstripping the mentor averages in every category. (See Figure 3.)

    Figure 3

    While useful for comparisons, the average scores obscure some of the variance within each group. The awareness group has the widest range between lowest and highest individual scores, for example, with a low score of 43 and a high score of 98. Out of a sample of 15 responses, the awareness group had seven responses that scored 90 or above, one response that scored between 80 and 89, five responses that scored between 70 and 79, and two responses that scored between 40 and 49—the lowest scores in any group.

    The respective lowest and highest individual scores for the intermediate group were 66 and 99, with four responses scoring 90 or above, eight responses scoring between 80 and 89, two responses scoring between 70 and 79, and one response scoring between 60 and 69.

    The expert group’s lowest and highest scores were 62 and 96. Similar to the intermediate group, the expert group had four responses that scored 90 or higher, six responses scoring between 80 and 89, three responses scoring between 70 and 79, and two responses scoring between 60 and 69.

    The range in the scores of the mentor responses is much narrower. The lowest score for the mentor group was 82, while the highest score was 99. The mentor group had 12 responses that scored 90 or higher, and three responses that scored between 80 and 89.
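The score-band tallies above can be reproduced with a small binning routine. The individual scores below are invented, except for the reported low (43) and high (98); the band counts, however, match the awareness group's reported distribution.

```python
# Sketch of the score-band tallies; individual scores are illustrative.
from collections import Counter

def band(score: int) -> str:
    """Decade band, e.g. 92 -> '90+', 75 -> '70-79'."""
    return "90+" if score >= 90 else f"{score // 10 * 10}-{score // 10 * 10 + 9}"

awareness_scores = [98, 96, 95, 94, 93, 92, 90, 85, 79, 77, 75, 72, 70, 48, 43]
counts = Counter(band(s) for s in awareness_scores)
print(dict(counts))  # {'90+': 7, '80-89': 1, '70-79': 5, '40-49': 2}
```

The same binning applied to the intermediate, expert, and mentor scores would yield the distributions reported in the preceding paragraphs.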

    Implications of the Results

    There are a number of possible explanations for these results. In particular, we feel that the average scores among student groups warrant scrutiny.

    As mentioned previously, communication is one of the three competencies where the gap between student and employer ratings is widest. Before a student or recent graduate can begin to improve any skill, they must be aware that they need improvement and be able to accurately self-assess their proficiency. Returning to the data, we found that the expert group scored only slightly higher than the awareness group, a group made up of students who rate themselves as having only a theoretical knowledge of the competency. Why did these groups receive such similar scores but rate themselves so differently?


    One possibility is that the expert group was overconfident in assessing their communication proficiency, perhaps giving more weight to communication skills beyond written communication. Students could be focusing on their ability to deliver a presentation, participate in an effective dialogue, or instruct others how to accomplish a task. The nature of the internship course's final evaluation would likely undercut this theory, however. A rubric listing indicators for each proficiency level is included in the question, and one of the indicators for expert refers to "mastery and attainment in all areas."17 It is possible, of course, that some of the students who rated themselves as experts either ignored the grading criteria or did not consider writing skills when asked to conceptualize their communication proficiency. Regardless, the student has overrated their communication proficiency; ignoring part of the evaluation, itself a form of written communication, demonstrates a failure to engage in communication, while a misunderstanding of the rubric demonstrates a less-than-expert knowledge of the subject matter.


    It is important to note that neither of the possibilities indicates that the students in the expert group are not competent communicators, merely that they have expressed more confidence than their abilities warrant, at least as implied by our data. While it is certainly possible that this overconfidence is rooted in hubris, there are other, more benign possibilities. Students may be evaluating their communication proficiency based on their experiences in an academic environment, using techniques tailored to both that environment and their generation. However, the communication competency encompasses a wide range of career fields and age groups, and students may be overestimating the degree to which their current communication skills will translate into the workplace. The ability to give a clear presentation or train a new hire should not be underrated, of course, but the ability to write a clear and concise memo or email should likewise not be neglected. More research into the causes of this overrating is undoubtedly warranted.


    Another possible explanation for the minor difference in the average scores between the expert and awareness groups is our previous theory's inverse: Some of the students in the awareness group underrated their proficiency level. The distribution and range of scores in the awareness group support this hypothesis. While the awareness group had the lowest scores of any group, it also had the most responses scoring 90 or above of any student group. The low scores suggest that some students accurately assessed that their communication skills could use some improvement, but the sizeable difference between the lowest and highest scores, coupled with the relatively large number of high-scoring responses, suggests that some students are actually much better at written communication than they think they are. Like their peers in the expert group, they may be undervaluing written communication skills. It is likely that they also lack the experience necessary to accurately evaluate and frame their proficiency with written communication and the importance of those skills in the workplace.

    Quantitative differences between mentor and student groups

    The quantitative differences in the data between mentor and student responses support this theory. In the student groups, the average word count correlates with proficiency level. Students who rated themselves as expert wrote, on average, the longest responses. Likewise, the intermediate group wrote the next longest entries, and the awareness group averaged the shortest word count among the student responses. However, as previously discussed, the mentor responses averaged both the highest score and shortest word count. When asked to describe a student’s communication skills, the mentor responses were direct, relatively short, and largely error-free. It should be noted that the prompts for the mentor and student groups are slightly different, and further research could examine this trend in greater detail. However, the trend toward longer responses among student groups, when taken with the uniformity of student questions, does seem to imply that students see some connection between the communication competency and their ability to write at length about a given topic. Further research could determine if this link is causal at the point of the final evaluation (students who rate themselves highly then write a lengthy response to justify that rating) or over the long term (students who are more comfortable writing rate themselves highly).

    The trends of shorter, more grammatically correct responses from mentors and longer, less correct responses from students line up with the broader divergence between academic and workplace writing. The students are responding to the final evaluation for the internship course, circumstances which understandably prompt a response in the vein of the academic writing with which they are familiar. The relatively lengthy responses are consistent with a desire to demonstrate whatever knowledge they think is pertinent to the prompt, while the high number of errors of correctness is the type of mistake that would likely be addressed in subsequent drafts. On the other hand, mentors are asked to select their intern's strongest and weakest competency, a task similar to conducting a performance review. The shorter, but mostly error-free, responses are clear, concise, and well-organized writings from full-time employees: quintessential business writing.

    The intermediate group: Most accurate self-assessment?

    The data for the intermediate group is also interesting. The average length for the intermediate group was 205 words: longer than the awareness group's, but shorter than the expert group's. However, when controlling for word count, the intermediate entries had the fewest errors among the student groups (the fewest average errors per word). Further research would be necessary to draw definitive conclusions, of course, but it is possible that the intermediate group more accurately assessed their proficiency level, at least on average.

    One more point of comparison bears mentioning: The most common error among each group was in the category of correctness, but, after controlling for length, errors of correctness were least common in the mentor and intermediate groups, whose averages were nearly identical (0.019547 and 0.019512 errors per word, respectively; a difference too small to be meaningful at this sample size). Grammarly uses the correctness category to note spelling and grammar mistakes, i.e., technical errors in writing. Future research could focus on technical errors of this kind, how common they are, and among what groups. However, the limited number of errors of correctness per word could imply a more businesslike approach to writing, where the main points are presented quickly, efficiently, and with correct grammar and spelling. Alternatively, the intermediate group may be generally good or well-practiced writers; while academic writing is more tolerant of spelling or grammatical errors in an early draft or on an exam, such errors are usually discouraged.
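The intermediate figure above follows directly from that group's reported averages (4 correctness errors over 205 words); the mentor figure presumably reflects unrounded averages, since the published rounded values (1.7 errors over 85 words) do not reproduce it exactly. A minimal check:

```python
# Errors-per-word normalization used for the comparison above.
def errors_per_word(avg_errors: float, avg_words: float) -> float:
    return avg_errors / avg_words

intermediate = errors_per_word(4, 205)
print(round(intermediate, 6))  # 0.019512, matching the figure in the text
```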

    Limitations of the Study

    The structure of this study necessarily imposes limitations, and those limitations should be made explicit. As mentioned previously, the communication competency encompasses many forms of communication, with written communication skills only making up a fraction of the competency. This study only examines the quality of written communication, and while it provides some insight into the different ways that students and mentors write about communication skills, it neither attempts nor claims to capture the complete spectrum of communication abilities. As anyone who has conducted a job interview knows, writing ability may be an important part of a potential employee's skill set, but interpersonal communication skills are also essential.

    Sample selection is also a limiting factor. As with any study that relies on sample groups, there is a possibility that the sample is not representative of the population at large. We selected responses based on length to ensure we had passages long enough to maintain narrative analysis best practices, meaning that our samples for each group draw from the lengthiest responses.

    Finally, our usage of Grammarly scores shaped the way that the data were evaluated and interpreted. Grammarly was a convenient tool for making qualitative comparisons, but, like any machine learning system, its reasoning and methods for scoring can be difficult to parse. There may be biases in Grammarly’s scoring that we did not detect, but it was a useful method of evaluation, especially for a preliminary study with a small sample size.

    Roots of the Competency Gap

    Each of the above limitations represents a valid concern, and there may be others that we missed. Either way, more research into this area could address these concerns. Given the importance of clear communication between employers and recent graduates, more research is almost certainly warranted. That being said, this particular study seems to imply that the communication competency gap is rooted in different understandings of what it means to communicate well, at least in writing. Student responses tended to resemble academic writing as opposed to business writing, a result that makes sense given that the respondents are still students and the prompts are presented as a final exam. If a recent graduate’s first experience with workplace writing is after they have entered the workforce, however, they may be in for a rude awakening and a rushed learning period.

    On a more positive note, students who do well with academic writing are likely to be able to transition to skillful workplace writing with some assistance. The critical thinking, creativity, intercultural fluency, and organizational skills that academic writing assignments seek to encourage are all transferable to workplace writing; the pieces are the same—they just fit together differently in the workplace. These attributes additionally bleed over into other competencies, strengthening other skill sets and building a well-rounded individual.

    This overlap may explain other employer perceptions of recent graduates. While a competency gap also exists in the critical thinking/problem solving category, more employers rate recent graduates as proficient critical thinkers than as proficient communicators.18 While exposing students to business writing earlier will likely improve their business writing skills, it may also be beneficial for employers to have a better understanding of the potential root causes of the gap and take steps to help bridge it. Academic writing provides a wide variety of tools that are applicable in many different fields, but the necessarily focused nature of businesses may leave a recent graduate unsure of which tools are appropriate for the task at hand. Taking the time to teach and assist a new hire will hasten their integration with the team.


    This difficulty with written communication may be one cause of the gap in the communication competency. Recent graduates may be overconfident in their ability to communicate in writing; alternatively, there could be some deeper misunderstanding of what constitutes good writing skills. While there are some overlaps in the skills needed for written communication in, say, an academic setting versus a workplace setting, each setting has different standards for what it considers to be good communication. In addition to the differences in standards between these broader settings, different areas of study or career fields will also have varying standards for written communication skills. Just as a philosophy term paper will differ from a microbiology lab report, the written communications of an engineering firm will differ from those of a law office.

    While the wide range of standards within either academic disciplines or career fields makes codifying specific differences challenging, academic and business writing diverge in many notable ways. The purpose behind writing is one such divergence. In an academic setting, writing has a pedagogical role: Students write to learn.19 An essay provides a student the opportunity to explore and deepen their understanding of the subject matter, to express their personal views, and to demonstrate their ability to think critically.20 On the other hand, business writing is intended to make something happen; solving problems, storing vital information, or proposing a new course of action takes priority over demonstrating the author's knowledge.21 Workplace writing must be “clear, complete, coherent, concise, and compelling” in order to inspire the desired action, but it must also be “well organized and visually effective” so as to encourage a timely response.22 Unlike in an academic setting, time is of the essence in the workplace; business communication can seldom afford the luxury of multiple drafts. Employers take mastery of grammar for granted, expecting employees to know how to write well with little supervision and to “get it right” the first time.23 Further research into the differences between how communication skills are conceptualized in different settings would go a long way in explaining the competency gap. In addition, exploring the remaining seven career competencies in a similar fashion might reduce some of the other gaps associated with career readiness.


    1 National Association of Colleges and Employers. (2016). Career readiness defined. Retrieved from

    2 Adecco. (2019). Skills gap in the American workforce. Retrieved from

    3 Archer, W., & Davison, J. (2008). Graduate employability. The Council for Industry and Higher Education. Retrieved from

    4 Archer, W., & Davison, J. (2008). Graduate employability. The Council for Industry and Higher Education. Retrieved from

    5 National Association of Colleges and Employers. (2018). Are your students career ready? Retrieved from

    6 Nunamaker, T., Cawthon, T., & James, A. (2020, May 1). The leadership competency: How interns and employers view development. NACE Journal. Retrieved from

    7 National Association of Colleges and Employers. (2016).

    8 Hora, M. (2017, February 1). Beyond the skills gap. NACE Journal. Retrieved from

    9 Ventayen, R., & Ventayen, C. (2018). Graduate students’ perspective on the usability of Grammarly in one ASEAN state university. Asian ESP Journal, 14(17.2). Retrieved from

    10 Grammarly. (2020). Bringing clarity to everything you write. Retrieved from

    11 Grammarly. (2020). How Grammarly can make your writing more engaging. Retrieved from

    12 Grammarly. (2020). How delivery can affect your writing. Retrieved from

    13 Douglas, J., Douglas, A., McClelland, R., & Davies, J. (2015). Understanding student satisfaction and dissatisfaction: An interpretive study in the UK higher education context. Studies in Higher Education, 40(2), 329-349.

    14 Grammarly. (2020). Bringing clarity to everything you write. Retrieved from

    15 Grammarly. (2020). How Grammarly can make your writing more engaging. Retrieved from

    16 Grammarly. (2020). How delivery can affect your writing. Retrieved from

    17 Clemson University Center for Career and Professional Development. (2020). Core competencies: Communication. Retrieved from

    18 National Association of Colleges and Employers. (2018). Are your students career ready? Retrieved from

    19 Darkwing. (2020). Differences between academic and business writing. University of Oregon.

    20 Vásquez, F. (2013). Differences between academic and business writing. Global Business Languages, 18, Article 8. Available at

    21 Darkwing. (2020).

    22 Vásquez, F. (2013).

    23 Ibid.

    Aaron James is a student assistant at Clemson’s Center for Career and Professional Development. He is currently pursuing a B.A. in history and serving as an intern researching career competencies among students. After graduating, he plans to continue his education and pursue a career where he can continue to serve in higher education.

    Troy Nunamaker, Ph.D., serves as the chief solutions officer for Clemson University’s nationally ranked career services. Dr. Nunamaker earned a Ph.D. as well as an M.Ed. and an M.H.R.D. from Clemson and a B.A. from Wittenberg University. His professional responsibilities have ranged from cultivating department-level and division-level corporate partnerships and managing on-campus, off-campus, and international internship programs to providing external review and consulting services and developing new strategies for keeping career services relevant.