Abstract

Objectives

Reuniting children with their families after a disaster poses unique challenges. The objective was to pilot test the ability of a novel image-based tool to assist a parent in identifying a picture of his or her children.

Methods

A previously developed image-based indexing and retrieval tool that employs two advanced vision search algorithms was used. One algorithm, Facial-Attribute-Matching, extracts facial features (skin color, eye color, and age) from a photograph and then matches them according to parental input. The other algorithm, User-Feedback, allows parents to choose children on the screen who appear similar to theirs and then reprioritizes the images in the database. The tool was piloted in a convenience sample of parent–child pairs in a pediatric tertiary care hospital. A photograph of each participating child was added to a preexisting image database. A double-blind randomized crossover trial was performed to measure the percentage of the database reviewed and the time taken using either the Facial-Attribute-Matching-plus-User-Feedback strategy or the User-Feedback-only strategy. Search results were compared to a theoretical random search. Afterward, parents completed a satisfaction survey.

Results

Fifty-one parent–child pairs completed the study. The Facial-Attribute-Matching-plus-User-Feedback strategy was superior to the User-Feedback strategy in decreasing the percentage of the database reviewed (mean ± SD = 24.1 ± 20.1% vs. 35.6 ± 27.2%; mean difference = −11.5%; 95% confidence interval [CI] = −21.5% to −1.4%; p = 0.03). Both were superior to the random search (p < 0.001). Search times were similar despite fewer images being reviewed with the Facial-Attribute-Matching-plus-User-Feedback strategy. Sixty-eight percent of parents were satisfied with the search, and 87% felt that this tool would be very or extremely helpful in a disaster.

Conclusions

This novel image-based reunification system reduced the number of images reviewed before parents identified their children. This technology could be further developed to assist future family reunifications in a disaster.

Natural and manmade disasters are low-probability but high-impact events that cause a large number of illnesses or injuries.[1] One common feature after disasters is the separation of children from their families and the subsequent challenges with family reunification. Recognizing these difficulties, multiple international and national organizations, including the World Health Organization and the National Commission on Children and Disasters, have advocated for developing more efficient systems to expedite family reunification.[1-7]

In response, many voluntary national and nongovernmental organizations (including social media) have created registries specifically designed to assist in family reunification. However, these registries may be of limited use for unaccompanied children (children separated from their families). Current registries incorporate text-based indexing and retrieval systems. Depending on developmental stage, an unaccompanied child may be unable, or afraid, to state his or her name and other identifying information such as close contacts or addresses. Not all registries have a field to indicate that the child is separated from his or her family. Furthermore, these sites typically cannot share information with each other.[8, 9]

In 2007, we proposed a process whereby photographs of unaccompanied children could be uploaded into a central database.[10] Using advanced vision technology, facial features from a photograph such as eye, skin, and hair color would automatically be extracted and indexed. When parents later described their missing children, the tool could reprioritize the photographs to show the “best fit.” Theoretically, this tool would be able to decrease the work burden of disaster relief personnel during a time when resources are scarce and simultaneously provide a method to index photographs to allow for faster family reunification.

Our primary objective was to assess the performance of the image-based reunification tool to assist a parent in identifying a picture of his or her child. Our secondary objectives were to assess the concordance of the tool's automated facial extraction of eye color, skin color, and age compared to parents' responses and to survey parents' satisfaction with the tool. To accomplish this, we piloted the tool in a prospective cohort of children and their caregivers.

Methods

Study Design

We performed a double-blind randomized crossover trial to measure the percentage of the database reviewed and the time spent by the parent in using the reunification tool to identify a photograph of his or her child. This trial tested two different search strategies in the reunification tool: Facial-Attribute-Matching-plus-User-Feedback and User-Feedback only. These two search strategies were then compared to a theoretical random model and a Facial-Attribute-Matching-Only model. The study institution's Committee on Clinical Investigation approved the study. Written informed consent was provided by all participants.

Study Setting and Population

We enrolled a prospective convenience sample of parent–child pairs over a 3-month period between November 2010 and January 2011. Participating parent–child pairs were from one of the following three groups: emergency department (ED) patients, inpatients, or family members of ED staff, all from a single tertiary care children's hospital.

We included children ages 0 to 18 years with a custodial parent. For children in the ED or inpatient units, the family was not approached if the clinical team felt that the child required emergent medical treatment. We excluded families who did not complete the study protocol, as well as children with congenital facial anomalies. We enrolled only English-speaking parents.

Development of an Image-based Reunification Tool

The custom-made tool incorporated content-based image retrieval algorithms and had undergone a battery of laboratory testing.[11] Two advanced vision search algorithms (Facial-Attribute-Matching and User-Feedback) and a database of pediatric images were created.

Facial-Attribute-Matching Algorithm

From an uploaded photograph, the Facial-Attribute-Matching algorithm automatically extracted eye color, skin color, and age. “Eye color” generated two categories: brown (light brown, dark brown, and hazel) and blue (blue, green, and gray). “Skin color” generated two categories: light and dark. “Age” had four categories: 0 to 12 months, 13 to 23 months, 2 to 4 years, and 5 years and older.

With the assistance of a software operator, a parent could then input a child's eye color, skin color, and age into the tool. Parents chose from pictures of six eye colors (dark brown, light brown, hazel, blue, green, and gray) and a palette of eight skin tones from light to dark. Age was grouped in the four categories described in the previous paragraph. The database was then reordered to first display photographs whose facial attributes exactly matched the parent's input (i.e., the same eye, skin, and age categories), followed by photographs providing the next best match.
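A minimal sketch of this exact-match-then-next-best ordering, assuming a simple count-of-matching-attributes score; the record fields and the scoring rule are illustrative, since the paper does not describe the tool's internal scoring in this detail.

```python
# Illustrative sketch: rank images so exact attribute matches come first.
from dataclasses import dataclass

@dataclass
class ImageRecord:
    image_id: str
    eye_color: str   # collapsed category: "brown" or "blue"
    skin_color: str  # "light" or "dark"
    age_group: str   # "0-12mo", "13-23mo", "2-4y", or "5y+"

def attribute_match_order(database, query):
    """Sort so exact matches on all three attributes appear first,
    followed by the next best matches (two, one, then zero in common)."""
    def matches(img):
        return sum((img.eye_color == query.eye_color,
                    img.skin_color == query.skin_color,
                    img.age_group == query.age_group))
    return sorted(database, key=matches, reverse=True)
```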

User-Feedback Algorithm

As the children in the database were presented, the user had the option on each screen of providing feedback by choosing one or more images that he or she thought looked similar to his or her child. Each time the user selected similar images, the remaining photographs in the database were reordered to take the selection into account.
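A sketch of this feedback-driven reprioritization, assuming each image has a feature vector and that similarity is an inner product; the prototype's actual similarity function is a modification of published adult-face methods and is not specified at this level.

```python
# Illustrative sketch: rerank remaining images by similarity to selections.
import numpy as np

def feedback_rerank(features, remaining_ids, selected_ids):
    """Reorder not-yet-shown images by similarity to the mean feature
    vector of the images the parent marked as similar to his or her child."""
    target = np.mean([features[i] for i in selected_ids], axis=0)
    return sorted(remaining_ids,
                  key=lambda i: float(features[i] @ target),
                  reverse=True)
```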

Database of Pediatric Images

Using publicly available photographs, the study team created a database of 1,213 children's photographs downloaded from parenting.com to simulate a large-scale disaster. Selected photographs were high resolution and depicted the child facing forward, with eyes open and minimal or no facial rotation. Because the photographs were taken under nonstandardized conditions and varied in quality, an online workforce that has been used in social science research, Amazon Mechanical Turk (Amazon.com, Inc., Seattle, WA),[12, 13] determined the eye color and skin color in each photograph. Each photograph in the database was evaluated by five different Mechanical Turk workers, and the assigned eye color and skin color were used if at least three of the five agreed. For images that did not meet this agreement criterion, the study investigators hand-labeled the respective attributes.[11] Age groups were downloaded from parenting.com along with the images.[11] Information regarding ethnicity or race was not available.
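The 3-of-5 agreement rule for the Mechanical Turk labels is simple to state in code; the vote lists below are illustrative.

```python
# Majority-vote consensus labeling with a 3-of-5 agreement threshold.
from collections import Counter

def consensus_label(votes, threshold=3):
    """Return the label at least `threshold` raters agreed on, or None so
    the image can be hand-labeled by the investigators."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= threshold else None

print(consensus_label(["brown", "brown", "hazel", "brown", "blue"]))  # brown
print(consensus_label(["blue", "green", "gray", "blue", "green"]))    # None
```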

Image-based Tool Search Strategies

We created two different search strategies for the purposes of this study. The first incorporated both Facial-Attribute-Matching and User-Feedback; the alternate approach incorporated User-Feedback only. Along with the indexing and retrieval tools, these strategies were loaded onto laptops for field testing.

Theoretical Random and Facial-Attribute-Matching-Only Model

Two additional search strategies, a random search and Facial-Attribute-Matching-Only, were evaluated, although parents did not use these algorithms directly. The performance of the theoretical random model was based on the assumption that if the images were shown in a completely random order, the ranking of the target image, i.e., the image's position in the sequence of images shown to the parent, would follow a uniform distribution. Under this assumption, the median ranking of the target image is 50% (interquartile range [IQR] = 25% to 75%).
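A quick simulation confirms the uniform-ranking assumption behind the theoretical random model: random orderings of a 1,214-image database reproduce the assumed median of 50% (IQR = 25% to 75%).

```python
# Simulate the target's position under a completely random ordering.
import numpy as np

rng = np.random.default_rng(0)
n_images = 1214  # 1,213 database images plus the target child
# Target's rank under a random shuffle, as a percentage of the database.
ranks = rng.integers(1, n_images + 1, size=100_000) / n_images * 100
print(np.percentile(ranks, [25, 50, 75]))  # approximately [25, 50, 75]
```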

Until a parent provided feedback, the Facial-Attribute-Matching-plus-User-Feedback strategy presented the images in an order based only on facial attribute matching. We used this initial ranking of images in the database to evaluate the performance of a Facial-Attribute-Matching-Only search, taking the target image's initial ranking in the database as the outcome.

Study Protocol

Study staff reviewed a standard slide set about the challenges of family reunification after a disaster and introduced the novel image-based reunification tool to every participating parent. Study staff then photographed each participating child up to three times with a Canon PowerShot SD1100 IS camera (Canon, Tokyo, Japan) with predetermined settings. Ideally, the child faced forward toward the camera with eyes open and with minimal or no facial rotation. The research assistant then selected the photograph that best depicted the child facing the camera and added it to the tool's database of images.

Staff asked each parent to input his or her child's eye color, skin color, and age, after which the parent performed two searches for the child. Parents were told only that they would be testing two different ways of searching. MATLAB's (MathWorks, Inc., Natick, MA) random number generator determined the order of the two search strategies through simple randomization. Both the parent and the study staff assisting the parent were blinded to the order.
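The crossover order was determined with MATLAB's random number generator; this Python sketch shows an equivalent simple (unrestricted) randomization.

```python
# Simple randomization of the two-strategy crossover sequence.
import random

SEQUENCES = ("FAM+UF then UF", "UF then FAM+UF")

def assign_sequence(rng=random):
    # Each parent-child pair receives one of the two search orders with
    # equal probability; no blocking or stratification is applied.
    return rng.choice(SEQUENCES)
```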

To find the target child, parents viewed successive screens of children, each displaying nine photographs (Figure 1). Until a parent provided feedback, he or she was presented with photographs as follows. For the search using Facial-Attribute-Matching-plus-User-Feedback, the database was reordered to first display photographs with facial attributes exactly matching the parent's input (i.e., the same eye, skin, and age categories), followed by photographs providing the next best match. For the search with User-Feedback alone, the first screen showed a random set of photos, followed by photographs in the order in which they had been added to the pediatric facial database. Consequently, in the User-Feedback-alone search, the target child's position was at the end of the database until the parent provided feedback.

Figure 1. Using publicly available photographs, the study team created a database of 1,213 children downloaded from parenting.com to simulate a large-scale disaster.

For each search, the position of the target child was tracked such that the search ended when the target child appeared on the screen, even if the parent did not identify the child. At the conclusion of both searches, the parent completed a written survey regarding satisfaction and usability of the reunification tool, as well as self-described technology use.

Outcome Measures

The primary outcome measures were the percentage of the database viewed by the parent and the time until the target child appeared on the screen for each search strategy (Facial-Attribute-Matching-plus-User-Feedback vs. User-Feedback only). These were compared to a theoretical random search and to a Facial-Attribute-Matching-Only search. Secondary outcome measures were the accuracy of the tool's classification of the child's facial features compared to the parents' responses, the effect of high versus low parental feedback, and parents' degree of satisfaction.

Data Analysis

With 1,214 images in the database (the 1,213 preexisting images plus the target child) and nine images shown on each screen, a maximum of 135 screens could be viewed during a search. Results are reported as the percentage of the database viewed until the target child appeared on the screen and are shown graphically with box plots.
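The arithmetic behind the maximum of 135 screens, and the percentage scale used for the primary outcome:

```python
# Maximum screens per search and the percentage-of-database outcome.
import math

n_images = 1214   # 1,213 preexisting images plus the target child
per_screen = 9
max_screens = math.ceil(n_images / per_screen)  # 135

def percent_viewed(screens_until_target):
    """Percentage of the database viewed when the target child appeared."""
    return 100.0 * screens_until_target / max_screens

print(max_screens, round(percent_viewed(33), 1))  # 135 24.4
```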

Paired t-tests were used to make pairwise comparisons between the Facial-Attribute-Matching, User-Feedback, and Facial-Attribute-Matching-plus-User-Feedback strategies. These tests were based on the differences between the percentages of the database viewed in each pair of searches performed by each parent, i.e., each parent served as his or her own control, and any within-person correlation is accounted for. Confidence intervals (CIs) for the mean differences were based on the t-distribution. To test whether a strategy differed from a completely random search, we used a one-sample t-test of the hypothesis that the expected percentage of screens until the target appears equals 50%. We compared the times spent on User-Feedback and Facial-Attribute-Matching-plus-User-Feedback searches with paired analyses as described above for search performance.

We conducted a series of sensitivity analyses to check that our conclusions were robust to the method of statistical analysis. In one alternative analysis, User-Feedback and Facial-Attribute-Matching-plus-User-Feedback were compared using analysis of variance (ANOVA) methods for two-period crossover studies. This method adjusts for sequence and period effects and for within-person correlation. There was no evidence of sequence or period effects (all p > 0.32). Using the Facial-Attribute-Matching, User-Feedback, and Facial-Attribute-Matching-plus-User-Feedback strategies, the percentage of the database viewed prior to finding the image of the target child was skewed to the right; a logarithmic transformation yielded more symmetric and bell-shaped distributions. Therefore, in addition to the paired and one-sample t-tests described above, we performed nonparametric Wilcoxon signed rank tests and also repeated the t-tests and ANOVAs after a logarithmic transformation. In these sensitivity analyses, the results remained substantially unchanged, and we report only the t-test results, without transformation.
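A sketch of the primary and sensitivity tests using SciPy (the study itself used SAS 9.2); the arrays hold placeholder data for illustration, not the study's measurements.

```python
# Paired, one-sample, and nonparametric tests on placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pct_fam_uf = rng.uniform(1, 100, size=51)  # % of database viewed, FAM+UF
pct_uf = rng.uniform(1, 100, size=51)      # % of database viewed, UF only

# Paired t-test: each parent serves as his or her own control.
print(stats.ttest_rel(pct_fam_uf, pct_uf))

# One-sample t-test against the random-search expectation of 50%.
print(stats.ttest_1samp(pct_fam_uf, popmean=50.0))

# Sensitivity analyses: Wilcoxon signed-rank test and the paired t-test
# repeated after a logarithmic transformation of the skewed percentages.
print(stats.wilcoxon(pct_fam_uf, pct_uf))
print(stats.ttest_rel(np.log(pct_fam_uf), np.log(pct_uf)))
```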

The accuracy of the photographic reunification tool's automatic extraction of skin color, eye color, and age was compared to the parents' descriptions of skin color, eye color, and age of their children. We calculated both the percentage agreement and the kappa statistic with a 95% CI.
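The agreement statistics can be sketched with scikit-learn; the label lists below are placeholders, not the study's data.

```python
# Percentage agreement and Cohen's kappa on illustrative labels.
from sklearn.metrics import cohen_kappa_score

parent = ["light", "dark", "dark", "light", "dark"]
tool   = ["light", "dark", "light", "light", "dark"]

pct_agreement = 100 * sum(p == t for p, t in zip(parent, tool)) / len(parent)
kappa = cohen_kappa_score(parent, tool)
print(pct_agreement, kappa)
# For the four ordered age categories, a weighted kappa is appropriate, e.g.
# cohen_kappa_score(parent_ages, tool_ages, weights="linear").
```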

The User-Feedback algorithm relies on parental feedback to update the rank of each image and select the next nine images to display. Based on how many times a parent provided feedback by indicating that an image was similar to his or her child, we calculated the average number of images chosen per screen. We categorized high feedback as choosing an average of more than 0.50 images per screen, and low feedback as at most 0.50 images per screen.
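The high/low feedback split from this paragraph, written as a small helper; feedback is averaged over all screens a parent viewed.

```python
# Categorize feedback by average number of similar images chosen per screen.
def feedback_level(images_chosen, screens_viewed):
    avg = images_chosen / screens_viewed
    return "high" if avg > 0.50 else "low"  # at most 0.50 counts as low

print(feedback_level(12, 20))  # 0.60 per screen -> high
print(feedback_level(6, 20))   # 0.30 per screen -> low
```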

Additional secondary outcomes were analyzed with graphical and other descriptive methods. All p-values are two-sided and considered statistically significant when ≤ 0.05. SAS version 9.2 (SAS Institute, Cary, NC) was used for data analysis.

Results

We approached 71 parent–child pairs, and 53 (75%) provided informed consent. Of these, 52 (98%) participated in the study. We excluded one parent who was unable to complete the study protocol because of a disruptive child. One ED staff family participated. One parent completed the study twice, once with each of two sibling children. Overall, we enrolled 50 parents and 51 children.

Table 1 describes parental characteristics. Most parents were mothers. Forty-five percent of the parents had a 4-year college degree or higher. More than 80% of the parents reported using the internet daily, and 80% had posted information about themselves on the internet in a chat room or on a social networking site.

Table 1. Parent Characteristics

Parent                                             Number (%)
Sex
  Female                                           40 (87)
  Male                                             6 (13)
Education
  Did not graduate high school                     2 (4)
  High school graduate/GED                         9 (20)
  Some college or 2-year degree                    14 (30)
  4-year college graduate                          7 (15)
  More than 4-year graduate                        14 (30)
Use of computer
  Once a month or less                             0 (0)
  A few times a month                              3 (7)
  A few times a week or more                       3 (7)
  Once a day or more                               40 (87)
Use of internet (n = 45)
  Once a month or less                             0 (0)
  A few times a month                              2 (4)
  A few times a week or more                       5 (11)
  Once a day or more                               38 (84)
Have posted information about self on internet (n = 44)
  Yes                                              35 (80)
  No                                               9 (20)

Note: n = 46 respondents except where shown.

Table 2 describes the characteristics of the study subjects as reported by parents and the characteristics of the pediatric database as determined by consensus (skin and eye color) or downloaded with the images (age). Most parents identified their children as having dark skin and brown/hazel eyes. While the images in the database were spread evenly across the four age groups, half of the children in the study group were 5 years of age or older.

Table 2. Characteristics of Study Subjects and Simulated Database of Disaster Victims

Attribute          Study Subjects (n = 51)   Image Database (n = 1,213)   p-value*
Skin color                                                                0.03
  Light            10 (20)                   779 (64)
  Dark             41 (80)                   434 (36)
Eye color                                                                 <0.001
  Brown/hazel      34 (67)                   974 (80)
  Blue/green/gray  17 (33)                   239 (20)
Age                                                                       <0.001
  0–12 months      5 (10)                    319 (26)
  13–23 months     6 (12)                    307 (25)
  2–4 years        14 (27)                   303 (25)
  ≥5 years         26 (51)                   284 (23)

Values reported as n (%). Study subjects' characteristics were determined by parents; database characteristics were determined by consensus or downloaded with the images from parenting.com.
*Fisher's exact test.

Performance of the different search strategies is summarized as the percentage of the theoretical maximum number of screens viewed until the target image appeared (Figure 2). Compared with a completely random search, for which the expected percentage of the database viewed by the parent is 50%, the other search strategies each significantly decreased the percentage of the database viewed (all p-values < 0.001). The Facial-Attribute-Matching algorithm, however, produced the greatest improvement in performance. The User-Feedback-Only algorithm resulted in 35.6% (SD ± 27.2%) of the database being viewed, compared with 26.5% (SD ± 21.2%) for the Facial-Attribute-Matching-Only algorithm. In addition, incorporating user feedback had little effect on the performance of the Facial-Attribute-Matching-Only algorithm, improving performance by only 2.5 percentage points over baseline (95% CI = −0.8% to 5.7%, p = 0.13). In contrast, adding facial attribute matching to the User-Feedback-Only algorithm improved performance over baseline by 11.5 percentage points (95% CI = 1.4% to 21.5%, p = 0.03). The time spent searching the database was similar for the Facial-Attribute-Matching-plus-User-Feedback and User-Feedback-Only strategies (5.1 minutes vs. 6.0 minutes; difference = 0.9 minutes; 95% CI = −0.7 to 2.6 minutes; p = 0.25).

Figure 2. Performance of four search strategies. Random assumes the ranking follows a uniform distribution. FAM = Facial-Attribute-Matching (the FAM-only algorithm uses the initial ranking based only on attribute extraction); UF = User-Feedback; FAM+UF = Facial-Attribute-Matching-plus-User-Feedback. The lower and upper boundaries of each box represent the 25th and 75th percentiles, the line within each box and the diamond represent the median and mean, whiskers extend to the most extreme observation within 1.5 IQR units of the 25th and 75th percentiles, and more extreme values are plotted individually. IQR = interquartile range.

In our analysis, we counted screens only up to the point where the target image appeared. In some cases, the parent did not recognize his or her child's picture when it appeared on the screen. This occurred in 4 of 51 User-Feedback searches and 3 of 51 Facial-Attribute-Matching-plus-User-Feedback searches, a combined rate of 6.9%. No parent failed to recognize his or her child's picture during both searches. Replacing the outcome in these cases with 100%, i.e., assuming the entire database would be searched when a parent failed to recognize his or her child's picture, increased the means for User-Feedback and Facial-Attribute-Matching-plus-User-Feedback by about 5 percentage points each.

Table 3 shows the concordance in attributes (skin color, eye color, and age) between the parents' input and the tool's attribute extraction. There was 59% to 75% concordance on each attribute. For eye color, the tool was more likely than parents to classify the color as blue/green/gray. In 94% of cases, the tool classified age in either the correct category or an adjacent one. Comparing Facial-Attribute-Matching-plus-User-Feedback searches in which the extracted attributes matched the parent's reported attributes against searches with discordance showed clear differences (Figure 3).

Figure 3. Performance of the Facial-Attribute-Matching-plus-User-Feedback strategy for individual facial features. Percentage of the database viewed for each facial attribute when there was concordance or discordance between parent-provided and automatically extracted attributes. The lower and upper boundaries of each box represent the 25th and 75th percentiles, the line within each box and the diamond represent the median and mean, whiskers extend to the most extreme observation within 1.5 IQR units of the 25th and 75th percentiles, and more extreme values are plotted individually. IQR = interquartile range.

Table 3. Concordance Between Parent and Image-based Reunification Tool on Skin Color, Eye Color, and Age Category

Attribute    Concordance Between Parent and Tool (%)   Kappa (95% CI)*
Skin color   75                                         0.42 (0.15–0.68)
Eye color    67                                         0.29 (0.06–0.52)
Age          59                                         0.55 (0.38–0.71)

*Weighted kappa for age category.

With regard to the User-Feedback algorithm, the median number of images chosen per screen was 0.26 (IQR = 0.08 to 0.63) for User-Feedback searches and 0.38 (IQR = 0.12 to 0.87) for Facial-Attribute-Matching-plus-User-Feedback searches. Figure 4 shows that search performance was better when the parent provided more feedback.

Figure 4. Performance of User-Feedback (UF) and Facial-Attribute-Matching-plus-User-Feedback (FAM+UF) algorithms according to amount of user feedback. High feedback: on average, the parent selected more than 0.5 images per screen. Low feedback: at most 0.5 images per screen. The lower and upper boundaries of each box represent the 25th and 75th percentiles, the line within each box and the diamond represent the median and mean, whiskers extend to the most extreme observation within 1.5 IQR units of the 25th and 75th percentiles, and more extreme values are plotted individually. IQR = interquartile range.

Of the 51 parent–child pairs, 46 parents (90%) completed the written survey. Eighty-seven percent of parents felt this tool would be “very helpful” or “extremely helpful” in a real disaster, and 85% felt that it would be “very easy” or “easy” to use in a disaster. With regard to the user feedback option of choosing similar images, 92% of parents liked the option “somewhat,” “quite a bit,” or “a lot.” However, only 53% were “very satisfied” or “satisfied” with the similarity of the images displayed after a selection was made. The majority of parents (68%) were very satisfied or satisfied with the results of the search. Parents' levels of satisfaction with the two search strategies (Facial-Attribute-Matching-plus-User-Feedback vs. User-Feedback) were similar.

Discussion

In this study, we describe the performance and usability of an image-based reunification tool. To our knowledge, this is the first time such an application has been tested in a pediatric population. Compared to a theoretical random display of images, the novel tool reduced the number of images viewed by the parent by one-third to one-half. The more feedback a parent provided to the tool, the smaller the percentage of the database viewed. According to the written survey, the majority of parents felt this type of tool would be helpful during a real disaster.

After natural or manmade disasters, EDs will always be on the front lines to receive and care for victims including children.[1, 14-16] Pediatric victims who are separated from their families may not be able to self-identify or seek out family members due to their age, developmental delay, severe injury, or death.[10] Thus, all EDs in collaboration with their institutions should have plans to assist with family reunification.[17, 18]

The use of photographs to assist with reunification can be a helpful adjunct, especially when the child cannot self-identify. In 2005, Hurricane Katrina separated over 5,000 children from their families, and the National Center for Missing and Exploited Children emphasized that photographs were invaluable in reuniting children with their families.[19] Photographs of children were also recommended during the response to the Bam earthquake in Iran.[19, 20] Some have recommended the use of photographs at the state, local, and hospital levels.[21, 22] However, given limited resources and the technical capabilities of existing state and local systems, families would have to search manually through all photographs to identify their children.

The current tool uses automatic attribute classification of facial features and content-based image retrieval algorithms, both active areas of computer vision research.[23-26] There are limited data on the application of content-based image retrieval to children, so the algorithms deployed in the prototype were modifications of previously described algorithms developed for adult faces.[23-26] The attribute “sex” was not implemented in the tool given the difficulty of assessing sex in younger children, such as infants. While our study shows that these algorithms reduced the number of images viewed by parents, further research may yield more accurate pediatric facial attribute extraction (including sex) and better similarity functions, reducing the search even further.

Parents did not identify their children in nearly 7% of the searches. A parent expecting a certain facial expression from his or her child (such as smiling) may fail to recognize the child when presented with a different expression. There is also the possibility of oversaturation: after viewing many facial images, a kind of photographic fatigue may set in, making the faces seem less distinct from one another. Finally, prosopagnosia, a neurologic condition that causes a selective deficit in the recognition of faces, is thought to affect 2.5% of the white population.[27]

In our study, we sought to evaluate the performance of the Facial-Attribute-Matching-plus-User-Feedback and User-Feedback-only strategies. Because automated attribute extraction may not be 100% accurate, there was a possibility that discordance between extracted and parent-reported attributes would lengthen the search. Ultimately, our results show that although both strategies reduced the search compared to a random strategy, facial attribute matching was the driving force in reducing the number of images searched. However, in a homogeneous population, where all the children have the same eye and skin color, facial attribute matching may have limited use in distinguishing children in the same age group. In these circumstances, a User-Feedback option to choose similar-appearing children may help narrow the search further. The time spent on searches was similar for the two strategies, despite fewer images being viewed with the Facial-Attribute-Matching-plus-User-Feedback strategy. Further work is needed to understand this finding, as decreasing time to reunification is also critical in disaster situations.

Limitations

First, the study participants had a high education level and high computer familiarity, so our findings may not generalize to a more diverse population. Second, while facial features were automatically extracted for each child participant, the facial features of the database images were predetermined because of the quality of the online images. These results may therefore represent a “best-case” scenario, and results from actual deployment of the tool may vary. Third, study subjects' characteristics (age, eye color, and skin color) as determined by the parents differed from those of the images in the database; the tool's performance could vary with the mix of image characteristics in the database. Fourth, we assumed that the child could not verbally provide any identifying information, although more than half of our participants were over 5 years of age; a child able to provide his or her name or parents' names could improve any search. Fifth, the image tool did not extract sex or a specific age; adding these characteristics might further improve the tool. Sixth, while the Facial-Attribute-Matching-plus-User-Feedback strategy reduced the number of images viewed, the time spent on the two search strategies was similar, which has throughput implications. Seventh, the study was conducted in a simulated environment that may not accurately reflect a parent's stress and its effect on performance. Finally, some parents did not identify their children on the screen; this will likely happen during an actual disaster, and protocols may need to allow parents to repeat searches.

Conclusions

The photograph-based reunification tool reduced the number of images viewed by a parent looking for his or her child. The majority of parents surveyed felt that the tool would be helpful in a disaster and easy to use, and they were satisfied with their experience. Such a tool may provide an option for reuniting the most vulnerable children, those with a limited ability to identify themselves, with their families after a disaster. However, actual use of the tool will require further testing and a vetted, standardized community protocol.

The authors thank Lise Nigrovic, MD, MPH, for her critical review of the manuscript. They also thank the research assistants Sandy Wong and Brittany Kronick for their contribution to this project.

References
