Comparing the educational quality of free flap technique videos on public and paid platforms

Surgical videos are reshaping the landscape of surgical education. As this form of education has rapidly grown into a valuable resource for experienced surgeons, residents, and students, there is great variability in what is offered and how it is presented. This study aimed to assess and compare the educational quality of free flap instructional videos on public and paid platforms.

Methods: Three blinded reviewers independently rated each video using a modified LAP-VEGaS tool (total score 0-18, categorized as 0-6 low, 7-12 medium, and 13-18 high). Professionally-made videos were identified based on lighting, positioning, and video/imaging quality. Interrater reliability between the three reviewers was calculated.
The educational quality of the videos was compared between public and paid sources using Mood's median test. Pearson's correlation coefficient was used to assess the correlation between video length and educational quality.
Results: Seventy-six videos were included (40 public, 36 paid). The median video lengths for public and paid platforms were 9.43 (IQR = 12.33) and 5.07 (IQR = 6.4) min, respectively. There were 18 high, 16 medium, and 6 low-quality public videos, versus 13 high, 21 medium, and 2 low-quality paid videos. Four public and seven paid videos were identified as professionally made. Interrater reliability was high (α = .9).
No differences in educational quality were identified between public and paid platforms. Video length was not correlated with quality (p = .15). A video library compiling high-quality public videos was created (https://www.youtube.com/playlist?list=PL-d5BBgQF75VWSkbvEq6mfYI--9579oPK).
Conclusions: Public and paid platforms may provide similar surgical education on free tissue transfer. Therefore, whether to subscribe to a paid video platform for supplemental free flap education should be determined on an individual basis.
The rapid advancement of internet engagement has led to its emergence as a resource for education. Because of its ability to offer information in various forms, such as videos, articles, and interactive websites, the internet has great potential for delivering education specific to the medical field (Choules, 2007). Widely available and easily accessible online surgical videos have gained favor as resources for surgical training, reshaping the landscape of surgical education (Mota et al., 2018; Rapp et al., 2016). In particular, YouTube videos from surgeons, academic healthcare institutions, and educational companies highlight various procedures, teaching anatomy and surgical techniques. However, several studies have highlighted the variability in quality and reliability of these videos as a surgical instructional tool, regardless of the procedure (Aktoz et al., 2022; Chorath et al., 2021; De La Torre et al., 2021; Fischer et al., 2013; Reitano et al., 2021; Yammine & Assi, 2020). Several efforts have been made to resolve this concern, such as developing a standard screening checklist. The Laparoscopic Surgery Video Educational Guidelines (LAP-VEGaS) were developed in 2018 by an international, multispecialty group of trainers and trainees as a consensus on how to assess laparoscopic surgery videos for educational purposes (Celentano et al., 2018; Muller & Baker, 2022; Yammine & Assi, 2020). Three years after the creation of these guidelines, the validated LAP-VEGaS video assessment tool was introduced as a standard appraisal for surgical videos submitted for publication or presentation, including procedures outside the scope of laparoscopic surgery (Celentano et al., 2021; Chapman et al., 2021; Chorath et al., 2021; De La Torre et al., 2021; Reitano et al., 2021).
Within the field of Plastic and Reconstructive Surgery (PRS), free tissue transfer is considered one of the essential, yet difficult, skills to master. Although professional training courses exist, cost, location, and work schedules are potential hurdles that prevent participation (Margulies et al., 2020). As web-based technology advances, a large number of surgeons are utilizing online videos for continued education and surgical preparation (Kwon et al., 2019; Mota et al., 2018; Rapp et al., 2016). The purpose of this study was to assess and compare the educational quality of instructional videos on microsurgical flap reconstruction between free public and paid subscription platforms.

| Video search and screening
The video search was conducted on August 9, 2022, using the keyword "free flap" on YouTube (http://www.youtube.com) and the Plastic and Reconstructive Surgery (PRS) webpage. A list of plastic surgery courses, as depicted in Figure 1, was also accessed on the webpage of the American Society of Plastic Surgeons Education Network (ASPS edNet) (https://ednet.plasticsurgery.org/diweb/home). YouTube is a public platform, whereas PRS and ASPS edNet are private/paid. Sample size estimation was based on the LAP-VEGaS scores as a measurement of the educational quality of the videos. To demonstrate a three-point LAP-VEGaS score difference between the public and private platforms with a significance level (α) of .05 and power (1 − β) of 80%, a sample size of 26 videos (13 private, 13 public) was calculated, assuming an effect size of 1.2 obtained from Aktoz et al. (2022). The search on YouTube was performed in a new incognito browser window to disable data tracking that could potentially influence the results.
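The power calculation above can be sketched with the standard normal-approximation formula for a two-sample comparison. This is an illustrative sketch only, not the authors' actual software output: the hardcoded quantiles and the function name are ours.

```python
import math

# Standard normal quantiles, hardcoded to keep the sketch dependency-free:
# two-sided critical value for alpha = .05 and the quantile for 80% power.
Z_ALPHA = 1.959964
Z_BETA = 0.841621

def n_per_group(effect_size):
    """Normal-approximation sample size per group for a two-sample design:
    n = 2 * ((z_alpha + z_beta) / d)^2, rounded up."""
    return math.ceil(2 * ((Z_ALPHA + Z_BETA) / effect_size) ** 2)

print(n_per_group(1.2))  # 11 by this approximation
```

The plain normal approximation yields 11 per group for an effect size of 1.2; small-sample (t-distribution) corrections, as applied by most power-analysis software, push this toward the 13 per group reported in the study.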
Video contents generated from the PRS and ASPS edNet searches were also screened for eligibility. Videos were screened in the order in which they appeared on each platform after the keyword search. Video screening was concluded once the number of videos meeting the inclusion criteria reached the sample size estimation. Videos were excluded if they were animations, did not demonstrate flap harvest technique, or were not in English. Short numbered video clips listed as part of a free flap series were combined and graded as one collective video. Video search and collection were completed by the first author (YK). All videos were kept in a folder, and each was assigned a randomized ID number generated by Research Randomizer (Version 4.0). The resulting video ID numbers were recorded in Microsoft Excel 2019 for Windows and assigned to each reviewer for educational quality grading.

| Assessment of the educational quality
Three reviewers (LM, MA, JL), blinded to the platform (paid or public) each video belonged to, independently rated the educational quality of the videos using a modified version of the LAP-VEGaS tool (Celentano et al., 2021). The original LAP-VEGaS assessment tool is used for laparoscopic surgeries and provides a structural framework for evaluating the educational value and quality of a video. The modified version is based on the same nine scoring domains of the LAP-VEGaS tool; the only change was replacing "positions of access ports and extraction site" in domain 3 with "positions of free flap donor site and recipient site," as depicted in Table 1.
Each domain was graded as not present (0), partially present (1), or completely present (2). The total score was calculated as the sum of the individual domains, ranging from 0 to 18 (low to high). The quality score was further categorized on a point scale, with 0-6 being low, 7-12 medium, and 13-18 high quality.
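The banding described above maps directly to a small helper function; this is an illustrative sketch (the function name is ours, not from the study):

```python
def quality_category(total_score):
    """Band a modified LAP-VEGaS total (sum of nine 0-2 domains, so 0-18)
    into the study's low/medium/high categories."""
    if not 0 <= total_score <= 18:
        raise ValueError("total score must be between 0 and 18")
    if total_score <= 6:
        return "low"
    if total_score <= 12:
        return "medium"
    return "high"

print(quality_category(12))  # medium
print(quality_category(13))  # high
```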

| Data collection and statistical analysis
Video characteristics were documented. Professionally-made videos were identified based on lighting, positioning, and video/imaging quality. For YouTube videos, the country of origin of the author, specialty, type of free flap, date of upload, video duration, number of viewers, and likes/dislikes were recorded. Additionally, the like ratio, view ratio, and video power index (VPI) were calculated. The like ratio was reported as a percentage (likes × 100/[likes + dislikes]). The view ratio was calculated as viewership/time since upload. The video power index (VPI), used to assess video popularity, was determined by (like ratio × view ratio/100). For videos collected from PRS and ASPS edNet, the date of upload, video duration, type of free flap, authors, and specialty were recorded. Days since upload for each video were calculated using August 9, 2022, as the reference date, when available.
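The three popularity metrics follow directly from the formulas above; a minimal sketch (function names are ours):

```python
def like_ratio(likes, dislikes):
    """Likes as a percentage of all ratings: likes * 100 / (likes + dislikes)."""
    return likes * 100 / (likes + dislikes)

def view_ratio(views, days_since_upload):
    """Average viewership per day since upload."""
    return views / days_since_upload

def video_power_index(likes, dislikes, views, days_since_upload):
    """VPI = like ratio * view ratio / 100."""
    return like_ratio(likes, dislikes) * view_ratio(views, days_since_upload) / 100

# A hypothetical video with 90 likes, 10 dislikes, and 10,000 views over 100 days:
print(video_power_index(90, 10, 10_000, 100))  # 90.0
```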
Interrater reliability for video quality ratings was calculated using Cronbach's alpha (α) to ensure concordance among the three reviewers.
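Cronbach's alpha treats the reviewers as "items" rated across the same set of videos. A minimal pure-Python sketch of the standard formula, assuming one total score per video per rater (the study itself used SPSS for this calculation):

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Cronbach's alpha: k/(k-1) * (1 - sum of per-rater variances /
    variance of per-video summed scores). `ratings` holds one score list
    per rater, with videos in the same order."""
    k = len(ratings)                                    # number of raters
    rater_vars = sum(variance(r) for r in ratings)      # per-rater score variance
    totals = [sum(scores) for scores in zip(*ratings)]  # summed score per video
    return k / (k - 1) * (1 - rater_vars / variance(totals))

# Two raters in perfect agreement give alpha = 1.0:
print(cronbach_alpha([[10, 12, 14], [10, 12, 14]]))  # 1.0
```

Disagreement between raters lowers alpha; the study's observed α = .9 indicates high concordance.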
Pearson's correlation coefficients (r) were used to assess correlations between viewership, view ratio, likes, like ratio, video power index, video length, days since upload, and total quality score. Pearson's chi-squared test was conducted to assess the association between professionally-made videos and video sources. Mood's median tests were performed to compare days since upload between video platforms, and to compare quality scores between (1) professionally-made and non-professionally made videos and (2) public and paid platforms. All statistical analyses and data collection were performed using IBM SPSS Statistics (Version 28.0) and Microsoft Excel 2019 for Windows. Statistical significance was defined as a p-value less than .05.
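Pearson's r, the main correlation statistic used above, can be sketched in a few lines; this is an illustrative pure-Python version, not the study's actual SPSS procedure:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(pearson_r([1, 2, 3], [2, 4, 6]))  # perfect positive correlation (~ 1.0)
print(pearson_r([1, 2, 3], [6, 4, 2]))  # perfect negative correlation (~ -1.0)
```

In practice, `scipy.stats.pearsonr` and `scipy.stats.median_test` (Mood's median test) provide these statistics together with p-values.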

| Descriptive video characteristics
A total of 357 videos were screened. After removing duplicates and ineligible videos, 76 videos published between 2011 and 2022 were included in the final analysis (40 YouTube, 28 PRS, and 8 ASPS edNet).
The search and screening process is depicted in Figure 1. Characteristics of each video are presented in Table 2 (public videos) and Table 3 (paid videos). Thirteen (32.5%) public videos were published in the United States, followed by nine (22.5%) in South Korea and eight (20%) in India. In contrast, 25 (69%) paid videos were published in the United States. The authors of the public videos were plastic surgeons (20; 50%), otolaryngologists (13; 32.5%), and oromaxillofacial surgeons (6; 15%), whereas all the paid videos were created by plastic surgeons (36; 100%). The most common type of flap from the public source was the fibula flap (9; 22.5%), followed by the anterolateral thigh flap (7; 17.5%) and the radial forearm flap (7; 17.5%). The transverse upper gracilis flap (5; 14%), deep inferior epigastric flap (5; 14%), fibula flap (3; 8%), and profunda artery flap (3; 8%) were the most common flaps from private sources. The median days since upload were 923 [862] and 2099.5 [1678] days for the public and paid platforms, respectively. Video length ranged from 1 min 18 s to 1 h 22 min 45 s (public 9.43 [12.33] min; paid 5.07 [6.4] min).

[Figure 1: Flow chart for the video screening and review process. The public video source was YouTube, whereas the private sources were PRS and ASPS edNet. Video searches were conducted using the keyword "free flap" on YouTube and the PRS webpage. A list of plastic surgery courses, as depicted in the figure, was also accessed on the ASPS edNet webpage. A total of 357 videos were screened from the public source (n = 95) and private sources (n = 262). After removing ineligible videos and duplicates, the remaining 114 private videos were evaluated and combined as one collective video if listed as part of a free flap series. A final 76 videos were included for analysis (40 YouTube, 28 PRS, and 8 ASPS edNet). ASPS edNet, American Society of Plastic Surgeons Education Network; PRS, plastic and reconstructive surgery.]

| Educational quality of the videos
Interrater reliability between the three reviewers was high (α = .9).
Quality score breakdown per grading domain is listed in Table 1.
Two of the nine domains were rated with a median score below 1: formal presentation of the case, 0.5 [1], and relevant outcomes of the procedure, 0 [1]. Four domains had a median score of 2: standardized step-by-step description of the surgical procedure, 2 [1]; clear intraoperative findings with constant reference to anatomy, 2 [1]; audio/written commentary in English, 2 [0]; and appropriate image quality, speed, and clear operating view, 2 [1]. Public videos demonstrated significantly higher scores in one domain: appropriate image quality, speed, and clear operating view (p = .0002). In terms of general educational quality, 18 videos from the public platform were rated as high quality, 16 as medium, and 6 as low, with a total median score of 12 [5]. Conversely, there were 13 high-quality, 21 medium, and 2 low-quality videos from the paid platforms, with a total median score of 12 [5]. No significant difference in total score was detected between public and paid sources (p = .94) (Figure 2). More recent videos were associated with the public platform (p = .005). The overall quality score was not correlated with video length (r = .1, p = .15), number of views (r = .3, p = .06), days since upload (r = .07, p = .55), or like ratio (r = .01, p = .54). However, correlations were identified between total scores and likes (r = .34, p = .03), view ratio (r = .33, p = .04), video power index (r = .33, p = .04), and professionally-made videos (p = .03). No correlation was identified between professionally-made videos and video sources (p = .24). Table 4 summarizes correlations between video quality score and different variables.

| DISCUSSION
Online video-based learning has emerged as one of the most frequently utilized resources for surgical education (Kwon et al., 2019; Mota et al., 2018; Rapp et al., 2016). Watching a surgical technique video allows viewers to familiarize themselves with the operative steps and visualize the critical anatomy, contributing to case preparation. In particular, YouTube was reported as the most utilized video source for surgery preparation and was identified as a potentially effective learning method for microsurgical techniques (Choi et al., 2022; Rapp et al., 2016). However, in addition to YouTube videos, many surgeons and residents also utilize content provided by professional organizations that require subscription fees.

[Table 1: Median and IQR scores by quality domain. Q1. Authors and institution information; the title of the video includes the name of the procedure and the pathology treated. Q2. Formal presentation of the case, including patient details and imaging, indication for surgery, comorbidities, and previous surgery; patient anonymity is maintained. Q3. Position of patient, surgical team, free flap donor site, and free flap recipient site. Q4. The surgical procedure is presented in a standardized step-by-step fashion. Q5. The intraoperative findings are clearly demonstrated, with constant reference to the anatomy. Q6. Relevant outcomes of the procedure are presented, including operating time, postoperative morbidity, and histology when appropriate. Q7. Additional graphic aid is included, such as diagrams, snapshots, and photos, to demonstrate anatomical landmarks and relevant or unexpected findings, or to present additional educational content. Q8. Audio/written commentary in the English language is provided. Q9. The image quality is appropriate with a constant clear view of the operating field; the video is fluent with appropriate speed. a Indicates statistical significance.]
[Table 2: Summary of video characteristics and educational quality of the public videos.]

[Table 3: Summary of video characteristics and educational quality of the paid videos.]

Current literature indicates an overall suboptimal educational quality of surgical technique videos across different specialties, including PRS, Otolaryngology, Obstetrics and Gynecology, Orthopedic Surgery, and General Surgery (Aktoz et al., 2022; Besmens et al., 2021; Chapman et al., 2021; Choi et al., 2022; Chorath et al., 2021; De La Torre et al., 2021; Derakhshan et al., 2019; Fischer et al., 2013; Savran et al., 2022; Yammine & Assi, 2020). These findings were primarily attributed to the lack of formal discussion of the procedures, such as indications, contraindications, risks, complications, and post-operative management. This trend was also observed in our study, reflected by the variable quality and substandard median scores of both the public and paid videos. In terms of scoring, two domains, formal presentation of the case and relevant outcomes of the procedure, received the lowest scores for videos on both platforms (Table 1). Of note, most videos focused only on the technical aspects of the surgery and excluded medical information regarding the patient, including past medical/surgical history, imaging, and indications for the procedure, as well as outcomes and post-operative management, thereby missing elements of the two scoring domains. These missing elements were also reflected in a study conducted by Savran et al. (Savran et al., 2022). We acknowledge the focus of these videos on surgical technique; however, since pre-operative planning and post-operative management are integral parts of the procedure and of utmost importance for success, we recommend including a short summary of the case prior to the surgery demonstration and a closing statement briefly mentioning post-operative management. It is surprising that paid videos received lower scores in the domain of appropriate image quality, speed, and operative view, since these videos undergo strict appraisal prior to publication, in contrast to the unchecked nature of publicly-sourced videos. High scores in the audiovisual features of the public videos were also reported by Derakhshan et al. (Derakhshan et al., 2019). Granted that the median score of paid videos for this domain was 2 (completely present), the IQR of 1 indicated variation in scores among the paid sources. Furthermore, although theoretically there should be more professionally-made content on the paid platforms than on public platforms, we identified no differences.

The unexpectedly suboptimal imaging/speed quality of the paid videos revealed an important area of improvement for paid subscription platforms. The results of this study suggest comparable educational quality on free tissue transfer between public and paid sources, bringing into question the rationale behind subscription fees.
In this study, YouTube was used to represent a free, public platform, given its unparalleled popularity attributable to its user-friendly interface and the vast number of results returned by a single search. Although YouTube is undoubtedly a mainstream resource for surgical preparation, viewers should keep the unchecked nature of its content in mind (Choi et al., 2022; Derakhshan et al., 2019; Gray et al., 2020; Kwon et al., 2019; Mota et al., 2018; Rapp et al., 2016).
Unlike other peer-reviewed forums, such as journals and medical associations/societies, there is no peer-review process to evaluate YouTube videos, highlighting the possibility of circulating misinformation. Specifically, Derakhshan et al. found technical and ethical concerns, including an incorrectly termed procedure, a false approach to making skin incisions, and an operation performed on a direct relative, in three rhytidectomy videos. Furthermore, Rodriguez et al. identified potential safety violations regarding dissection and cauterizing techniques in 5 of 10 laparoscopic cholecystectomy videos (Derakhshan et al., 2019; Rodriguez et al., 2018). Additionally, the literature has recognized the variable quality and reliability of YouTube videos across different surgical procedures (Aktoz et al., 2022; Chorath et al., 2021; De La Torre et al., 2021; Fischer et al., 2013; Reitano et al., 2021; Yammine & Assi, 2020). Although no differences in educational quality between public and paid sources were identified in this study, variability […] (Table 4). Similarly, no significant correlations were identified between these factors and video educational quality for upper blepharoplasty, general microsurgery, and endoscopic endonasal surgery (Besmens et al., 2021; De La Torre et al., 2021; Fernandez-Diaz et al., 2022).
Several limitations are present in this study. First, the LAP-VEGaS tool was initially validated to evaluate laparoscopic videos and has not been applied to microsurgical procedures (Celentano et al., 2018; Celentano et al., 2021). It was selected for use in this study given its relevance and targeted purpose of assessing the educational quality of surgical technique videos (Aktoz et al., 2022; Celentano et al., 2021; Chapman et al., 2021; Chorath et al., 2021; De La Torre et al., 2021; Reitano et al., 2021). This approach reflects the educational quality of the technique demonstration from a trained surgeon/resident perspective, compared with the non-specific, generalized scoring categories listed in other assessment tools such as the DISCERN questionnaire, the Journal of the American Medical Association benchmark criteria, and the Global Quality Score (Bernard et al., 2007; Charnock et al., 1999; Silberg et al., 1997). Furthermore, the above assessment tools are widely utilized for non-surgical videos and for patient education purposes (Kunze et al., 2022; Kwak et al., 2022; Om et al., 2021; Ward et al., 2019; Zhang et al., 2022). Notably, the […]
[Table 2 abbreviations: ENT, otolaryngology; FC, fasciocutaneous flap; LR, like ratio; M, muscle flap; MC, myocutaneous flap; NA, non-applicable; O, bone flap; OC, osteocutaneous flap; OMC, osteomyocutaneous flap; OMFS, oral maxillofacial surgery; Ortho, orthopedic surgery; PRS, plastic and reconstructive surgery; PU, public; VPI, video power index; VR, view ratio.]
[…] Derakhshan et al., in which they assessed the educational quality of rhytidectomy videos and concluded that the videos overall failed to discuss indications, outcomes, and peri- and post-operative management (Derakhshan et al., 2019). Likewise, Besmens et al. evaluated upper blepharoplasty videos […] (Besmens et al., 2021).

[Table 3 abbreviations: -, non-reported; F, fat flap; FC, fasciocutaneous flap; M, muscle flap; MC, myocutaneous flap; OC, osteocutaneous flap; OM, osteomuscular flap.]

[Table 4: Correlations between quality score and variables. a Indicates statistical significance.]

[Table 5: Proposed checklist for instructional videos on free tissue transfers.]