Abstract

From the point of view of a service provider, a service engagement process begins when an opportunity becomes known and concludes when a proposal for service delivery is resolved (won, lost, or canceled). Understanding the service engagement process is therefore critical for many businesses. This paper reports an application of text analytics to predict the outcomes of service engagement opportunities from free-form text comments written about the opportunities during the course of the engagement process. The comments are attached to documents that also contain formally prepared solution proposals for potential deals. We examine whether the comments add value by predicting the outcome of the engagement. Our final data set comprised 1,000 engagements and approximately 20,000 comments. We designed and carried out two experiments: one building a general classifier that predicts outcomes from comments, and the other building a one-sided classifier that provides an advance warning for a significant subset of the deals with one particular outcome. The one-sided classifier achieved 96% precision (4 percent false positives) and 96% recall for the cancel class on the full set of training documents. Our experiments show the predictive value of comments for service providers during service engagement and provide an interesting indication of a trend in the practice of providing comments.


I. Introduction

Text analytics provides a powerful approach to identifying patterns and insights in large amounts of unstructured, text-based data.[15] In this paper, we use text analytics to explore the business case for including free-form comments alongside metadata about potential services deals throughout the timeline of the engagement process. The engagement process includes the steps that typically occur between a request for proposals and a signed contract. We would like to understand whether combining unstructured text data with structured metadata provides business insights.[11,18,19] Typical rationale includes improved predictive analytics, decision support, and the ability to capture and record metadata describing practice in unanticipated ways. To accomplish this goal, we studied an existing system used to manage extensive metadata about services engagements; augmenting the structured metadata is the capability to enter free-form descriptive comments. We use text analytics based on vocabulary usage statistics, an alternative to natural language understanding, to determine whether the free-form comments contain additional useful information.[1,2]

In the system, each engagement responds to a specific initiator called a request for service (RFS). Overloading the term in the usual way, we also refer to a document containing the metadata for an RFS engagement as an RFS. There are three possible outcomes for a completed RFS engagement: Win, Lose, or Cancel.

II. Research Goals

Our objectives are twofold. First, we aim to understand the general structure and implications of the body of comments we studied. Second, we apply text analytics techniques to these comments to predict the outcome of a service engagement opportunity during the engagement process. Such predictions help service providers focus on deals with high chances of winning. Note that even after engagement outcomes are realized, such predictions retain explanatory value: the free-form comments may provide clues to the reasons for the outcomes, even when the comment authors are unaware of any causal relationship.

We conjecture that when the people involved write free-form comments about deals during the normal course of their work, they use different modes of expression and supply hints indicating their degree of optimism or pessimism, perhaps even when that sentiment is not consciously recognized.

Thus, we want to know how well the comments as a whole predict the outcome, but also whether we can recognize usage patterns that permit much more accurate predictions from individual comments.[5,7,8,9] Such predictions could support decisions on prioritizing engagements and workload.

Our approach is to create two different classifiers: one that takes the entire corpus of comments from an engagement as input and predicts the outcome; and another that takes individual comments from an engagement as input and makes a one-sided prediction about a single outcome. In this paper we describe the techniques we use to construct each classifier and the measured accuracy.

III. Literature review

Our literature review covers work relevant to comment analysis across several fields. Related work exists in a number of other domains, but we found little that addresses comment analysis in our field.

There has been some relevant work on comment analysis in the medical field. Lee studied the concerns of Taiwanese nurses who were using information systems to comment on the care they were providing to patients [25]. Also in medicine, Fernandez-Luque et al. examined the comments on personal medical information contained in YouTube videos [23].

Outside the domain of medicine, we consider work in the business and social networking domains. Massari analyzed the comments in MySpace profiles to better understand the interactions users were having, based on what they were discussing [21]. Prefontaine et al. took a more business-oriented approach, analyzing the comments submitted in June 2008 to the Basel committee of the Bank for International Settlements [22]. Finally, Fong analyzed customer satisfaction survey comments concerning interlibrary loan services in research environments [24].

From this brief review we can see that there is work across many different areas, but there does not appear to be existing work in our area. We specifically examine service engagements and the comments generated during them; while some of the work in the business sector is tangentially relevant, it is concerned with customer surveys and the financial sector rather than service engagements. We therefore present our work against a background in which very little prior work exists.

IV. What is eClassifier?

Independently of our work, IBM Research developed a text analytics tool named eClassifier to classify free-form text descriptions of problems presented to call-center agents.[3,4,14,17] The tool runs stand-alone on a client workstation and can analyze large volumes of documents using a suite of sophisticated algorithms.[6,10,13,15] It also provides advanced visualization, various metrics, and statistical tools to analyze word frequency and to correlate combinations of terms and classes. We used the word-frequency analysis capability to generate an initial understanding of the document set prior to data cleansing. eClassifier constructs, trains, and tests software classifiers; it dynamically generates executable code that enables a user to create a taxonomy of classifications, build predictive models, and then use those models in a run-time environment as well as through the eClassifier graphical user interface.

As described more fully in the references, eClassifier uses an exploratory analytics approach to derive taxonomies from unstructured information. A bag-of-words feature space is created automatically from statistically meaningful terms (unigrams and bigrams). Terms are then ranked by a metric called "cohesion," which measures the similarity, in that feature space, of all documents that contain a given term. The terms with the highest cohesion are used as cluster generators. This approach tends to produce a set of easy-to-understand clusters, each with a single term serving as both cluster name and definition. The tool further provides an editing facility that lets the user refine a given taxonomy to improve its overall cohesion (intra-cluster similarity) and distinctness (inter-cluster differences). Once a taxonomy is complete, it can serve as a training set for a model builder that uses a library of classification tools (such as rule induction, naive Bayes, and support vector machines) to build a highly specific classification engine for the given classification.
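
To make the cohesion ranking concrete, here is a minimal Python sketch under stated assumptions: we take a term's cohesion to be the mean pairwise cosine similarity of the documents that contain it, which matches the description above but may differ in detail from eClassifier's internal metric, and load_comments is a hypothetical loader.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.preprocessing import normalize

    docs = load_comments()  # hypothetical loader: one string per document

    # Bag-of-words feature space over unigrams and bigrams, as described above.
    vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=5)
    X = normalize(vectorizer.fit_transform(docs))  # L2-normalized document rows

    def cohesion(term_idx):
        """Mean pairwise cosine similarity of the documents containing a term."""
        mask = np.asarray((X[:, term_idx] > 0).todense()).ravel()
        rows = X[mask]
        n = rows.shape[0]
        if n < 2:
            return 0.0
        s = np.asarray(rows.sum(axis=0)).ravel()
        # For unit-length rows, the sum of all pairwise cosines is ||s||^2 - n.
        return float(s @ s - n) / (n * (n - 1))

    terms = vectorizer.get_feature_names_out()
    top = sorted(range(len(terms)), key=cohesion, reverse=True)[:20]
    seeds = [terms[i] for i in top]  # highest-cohesion terms seed the clusters

Each seed term then names and defines one cluster, which is what makes the resulting taxonomy easy to interpret.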

V. Text Pre-Processing and Parsing

Our data consist of (i) free-form text comments organized by individual deal and (ii) the deal outcomes (win, lose, or cancel). The outcome is decided by the actions of the organization issuing the RFS: if the organization chooses a company other than IBM to provide its solution, the deal is labeled a "lose"; if it selects IBM's offer, a "win"; and if it cancels without selecting any offer, a "cancel." The comment text is written by internal IBM solutions architects and their managers. The corpus spans many different solutions teams and many different requesting organizations, none of which we are permitted to name. There are standard formats for the comments (author, date, time, and text), but these are not uniformly enforced, and not all fields are required. Thus, our pre-processing includes an initial data-cleansing step in which comments with atypical formats are discarded.

Subsequent pre-processing enables two distinct approaches: (i) treating each comment as one input to the eClassifier, and (ii) treating all comments associated with one deal as one input to the eClassifier. These two approaches correspond to the two text analytics strategies discussed earlier.

  • A. eClassifier Format
    The input for eClassifier is a single file containing all of the documents, delimited by carriage returns.[15] We developed automated scripts to transform the raw text into this format and expedite the text analytics process.
  • B. Removal of Names and Dates
    Our initial word-frequency study showed that many comments contained the names of specific employees on the engagement team. These names dominated the data and skewed the results. Because our goal was to investigate whether general predictions about deal outcomes could be made from the comments, we removed all names and date/time combinations, which were especially prevalent in deals of shorter duration. Several scripts strip this information and return the comments without names and timestamps (see the sketch following the table below).
  • C. Stop Words and Synonyms
    In addition to removing names and dates, several names that appeared frequently within the comment text were added to a stop word list, along with 453 common pronouns and articles; the list specifies terms to ignore during text analysis. A synonym list instructed the text analysis routines to treat synonyms and words sharing a common stem as the same term.
  • D. Processing of Dates
    Although specific author names and dates were eliminated from the comment text, we maintained structured data for each document, including the outcome of the engagement and the date of each comment. We observed a number of comments dated on or after the date the engagement outcome was realized, and these late comments often mentioned the outcome. Since we were interested in predicting outcomes from comments, we eliminated such late comments from all documents. Examples of processed data are given in the following table:
Table 1. Examples of processed data

No.  Outcome  Date     Comment
1    Win      2/19/10  Network architect is assigned.
              2/12/10  Need to provide two options: one from A and one from B.
2    Cancel   9/10/10  Client did not show up to the meeting. Meeting postponed.
              9/15/10  Client has not yet provided requirements.
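
To make steps B and D concrete, the following is a minimal pre-processing sketch. It assumes comments arrive as (deal_id, date, text, outcome) records with a known resolution date per deal; the record layout and the date pattern are illustrative assumptions, not our production scripts.

    import re
    from collections import defaultdict

    # Matches date and date/time stamps such as "9/15/10" or "9/15/10 14:30".
    DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b(?:\s+\d{1,2}:\d{2}(?::\d{2})?)?")

    def scrub(text, names):
        """Remove team-member names and date/time stamps from a comment."""
        for name in names:
            text = re.sub(re.escape(name), " ", text, flags=re.IGNORECASE)
        return DATE_RE.sub(" ", text)

    def prepare(records, resolution_date, names):
        """Drop late comments, then build both corpus organizations."""
        per_comment, per_deal = [], defaultdict(list)
        for deal_id, date, text, outcome in records:
            if date >= resolution_date[deal_id]:
                continue  # comments on/after resolution may mention the outcome
            clean = scrub(text, names)
            per_comment.append((clean, outcome))  # one comment per document
            per_deal[deal_id].append(clean)       # a deal's comments kept together
        return per_comment, per_deal

The two return values correspond to the two corpus organizations described above: one comment per document for the one-sided classifier, and one document per deal for the general classifier.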

VI. Analysis of Data

Our approach to analyzing the comments involved three phases:

  • 1) Initial exploration
  • 2) Design and training of experimental classifiers
  • 3) Testing of experimental classifiers
  • A. Initial exploration
    We began with the organized and cleansed data, each document containing all the comments pertaining to one deal. To gain insight into the data we used an eClassifier method called intuitive clustering, which clusters the data based on common keywords and keyword combinations found in the text.
    Next, we organized a classification of the resulting documents by deal outcome (win, lose, and cancel). eClassifier provides many different ways of classifying documents based on word frequency: a vector space with one dimension per word is constructed, and each document is represented in it as a normalized vector (a sketch of this representation follows this subsection). eClassifier also allows classifying documents by associated structured information, in this case labels indicating the deal outcome.
    Inspecting the classes in an eClassifier visualization (Figure 1, a projection onto the plane containing the centroids of the three classes) showed a great deal of overlap among the classes, suggesting that no word-frequency-based classifier could separate the three classes perfectly or even very accurately. We performed word-frequency measurements on the classes, looking for words that would help predict one of the outcomes, including examining the term dictionaries of each individual class. A set of words including "hold" and "postpone" proved to be good indicators of the cancel class (with the obvious explanation suggesting a causal relationship). The win and lose classes were harder to distinguish from each other.
    We attempted to create keyword taxonomies defining each class from commonly occurring keywords. We had limited success with the cancel class, and even less with the win and lose categories. The degree of overlap between win and lose in the visualization demonstrates this issue.
  • B. Design and training of experimental classifiers
    Data cleansing produced a training set of roughly twenty thousand comments associated with one thousand engagements, reduced from roughly forty thousand comments and fifteen hundred engagements.
    We next attempted to build a word-frequency-based general classifier for the three classes. eClassifier constructs and compares classifiers for labeled documents using several different classification algorithms. The best such classifier (the one with the largest number of documents correctly classified) classified roughly sixty percent of the training set correctly. Given that the easier-to-predict cancel class accounted for only roughly 20 percent of the documents, 60 percent accuracy seemed very reasonable and, if verified on other sets of similar data, would indicate some mild predictive power. In subsequent sections, we report the results of the verification experiment and discuss methods for exploiting weak classifiers of this type.
    Figure 2 also visualizes the comments, but there each node corresponds to one of the 14,000 comments in our test data. To provide clearly distinct points at the resolution of the figure we removed 90% of the points. Again, there is a heavy amount of mixing in the data; we cropped the figure to emphasize the mixing in the central area.
    Having trained the general classifier on training data containing one document per deal, we shifted attention to developing a good one-sided predictor. Here we wanted an alert facility that would recognize one class (cancel) with high precision (very few false positives) but with sufficient recall (enough correct positives) to capture a significant subclass of cancel.
    For the one-sided classifier we reorganized the data to provide one comment per document, each comment labeled with the outcome of its deal. The best general classifier for single comments had low accuracy (below 50%). Since we were looking for a one-sided classifier that would predict membership in the cancel class, we modified the training set to under-represent this class and over-represent the others. Our data were ordered by deal, so we removed the first half of the comments associated with cancel outcomes from the training set. This produced a much smaller (and thus more likely word-frequency-classifiable) subclass of the cancel class that still respected deal boundaries: when one comment from a deal belonged to the training subclass, all other comments from that deal did too (a sketch of this construction follows this subsection). We trained a test classifier on this training set and then tested it on all the data. The result was a one-sided classifier that achieved roughly 96% precision (4 percent false positives) for the cancel class when tested on the entire set of training documents; its recall for cancel on the same set was 96.02 percent.
    As in the case of the general classifier, subsequent sections cover the results of testing on new data and discuss how to exploit the predictive power of the one-sided classifier.
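
The following sketch makes the training-set construction and one-sided evaluation concrete. It is a minimal illustration under stated assumptions: a scikit-learn naive Bayes pipeline stands in for eClassifier's model builder, and load_labeled_comments is a hypothetical loader returning comment texts, outcomes, and deal identifiers ordered by deal.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import precision_score, recall_score

    # Hypothetical loader: parallel lists, ordered by deal as described above.
    comments, outcomes, deal_ids = load_labeled_comments()

    # Under-represent cancel: drop the first half of the cancel deals, keeping
    # whole deals together so no deal is split between kept and dropped comments.
    cancel_deals = []
    for d, o in zip(deal_ids, outcomes):
        if o == "cancel" and d not in cancel_deals:
            cancel_deals.append(d)
    drop = set(cancel_deals[: len(cancel_deals) // 2])

    train = [(c, o) for c, o, d in zip(comments, outcomes, deal_ids) if d not in drop]
    X_train, y_train = zip(*train)

    # Naive Bayes stands in for eClassifier's model builder in this sketch.
    model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(X_train, y_train)

    # One-sided evaluation: cancel vs. everything else, over all comments.
    pred = model.predict(comments)
    y_true = [o == "cancel" for o in outcomes]
    y_pred = [p == "cancel" for p in pred]
    print("cancel precision:", precision_score(y_true, y_pred))
    print("cancel recall:   ", recall_score(y_true, y_pred))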
Figure 1. Visualization of Comments Grouped By Engagement Opportunities

Figure 2. Visualization of Individual Comments

VII. Testing of Experimental Classifiers

In this section, we report the results of testing each of the experimental classifiers on a new set of data, arranged as before: the general classifier applied to data organized as one document per deal, and the one-sided classifier applied to data organized as one comment per document.

After cleansing, this test set of new data consisted of roughly fourteen thousand comments associated with roughly two thousand engagements. We observed that the reduction in the number of comments per engagement was a result of the elimination of comments in the data cleansing stage rather than a trend toward a smaller number of comments per engagement. Processed data have the same format as the examples provided above.

We observed a general trend toward more non-conforming comments in the later data. The accuracy of the general classifier on the new data was 70.26 percent. The precision of the one-sided classifier for cancel on the new data was 76.03 percent, and its recall was 58 percent. The data were segmented and randomly sampled to generate the test and training sets for this experiment (a sketch of the evaluation follows).
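
A minimal sketch of this verification step follows, assuming general_model and one_sided_model are the classifiers trained in the previous section; the two loaders are hypothetical and return the new data in the two organizations described above.

    from sklearn.metrics import accuracy_score, precision_score, recall_score

    # One document per deal for the general classifier.
    deal_docs, deal_outcomes = load_new_deal_documents()
    print("general accuracy:",
          accuracy_score(deal_outcomes, general_model.predict(deal_docs)))

    # One comment per document for the one-sided classifier.
    new_comments, comment_outcomes = load_new_comments()
    pred = one_sided_model.predict(new_comments)
    y_true = [o == "cancel" for o in comment_outcomes]
    y_pred = [p == "cancel" for p in pred]
    print("cancel precision:", precision_score(y_true, y_pred))
    print("cancel recall:   ", recall_score(y_true, y_pred))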

VIII. Exploiting the classifiers

The general classifier provides a better-than-random method for predicting outcomes from deal comments. Given historically observed outcome rates (the expected percentage of deals in each class), we can estimate associated properties, such as expected total contract value per deal and expected resource requirements per deal, from the averages of those properties over the entire set of deals. Applying the general classifier to a set of engagements whose outcomes have not yet been resolved can improve these estimates by substituting the averages obtained from the classified documents for the general averages.
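
As an illustration, the following sketch contrasts the baseline estimate with the classifier-refined estimate; the outcome rates and per-class property averages are placeholders, not figures from our data.

    # Illustrative placeholders, not figures from our data: historical outcome
    # rates and the per-class average of a property of interest (for example,
    # expected total contract value or resource requirement per deal).
    rate = {"win": 0.35, "lose": 0.45, "cancel": 0.20}
    avg_property = {"win": 1_200_000, "lose": 150_000, "cancel": 80_000}

    # Baseline: every unresolved deal gets the same expectation, a weighted
    # average of the per-class averages using the historical rates.
    baseline = sum(rate[c] * avg_property[c] for c in rate)

    # Refined: each unresolved deal instead uses the average for the class that
    # the general classifier predicts from its comments.
    def refined_estimates(open_deal_docs, general_model):
        return [avg_property[c] for c in general_model.predict(open_deal_docs)]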

The one-sided comment classifier recognizes a significant subset of the cancel class and can thus be used to modify engagement priorities dynamically.

We could use the same techniques to produce one-sided classifiers for the win and lose classes. We would not expect to achieve such high precision, but even recognizing a significant subset of one class can support systematic re-prioritization that improves overall engagement outcome rates by shifting effort toward some engagements at the expense of others.

IX. Conclusion

Table 2. Summary of classifier performance

Classifier           Training  Test
One-Sided Precision  98.02%    76.03%
One-Sided Recall     96.02%    58%
General Classifier   64.67%    70.26%

Our experimental results show the predictive value of text comments accumulated during the course of service engagements. A system that assesses the likelihood of winning a service engagement opportunity can thus be built with these text analytics techniques. Such systems can have substantial business impact because service engagement processes often require costly workforce effort. When a service provider must prioritize engagement opportunities because of limited resources, it can use the output of such a system to decide which of the ongoing opportunities at the same phase to focus on first.

The tools and techniques used in our study apply beyond the specific service business we studied. For example, one could use informal comments about a large team project to generate early warnings of potential delays in the project's completion.

Our demonstration of predictive value was accomplished in spite of a significant change in the character of the raw data: an increase in the number of non-conforming comments. Instead of an average of 20 conforming comments per engagement in the training data, the new data averaged 7 conforming comments per engagement.

This change seems to have significantly affected classifier performance. The general classifier's accuracy improved from 64.67 percent to 70.26 percent, but the one-sided classifier's precision deteriorated from 95.26 percent to 76.03 percent. Such variability in the data over time suggests using dynamic classifiers trained on moving windows of data in order to keep up with usage trends.

X. Future Research

We plan a few additional experiments to further characterize the behavior of classifiers developed by the methods of this paper.

Recall that we eliminate comments that occur on or after the date of resolution. To further characterize the predictive value of the comment data, we will include only comments with dates more than a selected period prior to the date of resolution. Plotting the accuracy of the classifiers as a function of the period length would provide a more accurate measure of predictive value.

In the case of the one document per deal organization, excluded comments will be removed from the tail of each document. In the case of the one comment per document organization, the excluded comments will be removed from the data.
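
A minimal sketch of this lead-time filtering follows; the record layout matches the pre-processing sketch above, the lead values are illustrative, and evaluate is a hypothetical wrapper that retrains and scores a classifier on the filtered corpus.

    from datetime import timedelta

    def filter_by_lead_time(records, resolution_date, lead_days):
        """Keep only comments dated more than lead_days before resolution."""
        return [(deal_id, date, text, outcome)
                for deal_id, date, text, outcome in records
                if date < resolution_date[deal_id] - timedelta(days=lead_days)]

    # Accuracy as a function of lead time; evaluate() is a hypothetical wrapper
    # that rebuilds documents, retrains a classifier, and returns its accuracy.
    curve = [(lead, evaluate(filter_by_lead_time(records, resolution_date, lead)))
             for lead in (0, 7, 14, 30, 60, 90)]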

The changing nature of the data suggests that a dynamic one-sided classifier trained on a recent moving window of data would perform better and more consistently. Training the classifier is an easily automatable process, so the one-sided warning classifier could be replaced every week after being trained on an overlapping set of data corresponding to a sliding window of constant duration. We plan to test this hypothesis of generally improved performance.
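
A minimal sketch of the proposed retraining loop follows; the 90-day window length is an assumption, the weekly cadence comes from the text, and the naive Bayes pipeline again stands in for eClassifier's model builder.

    from datetime import timedelta
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def weekly_models(records, start, end,
                      window=timedelta(days=90), step=timedelta(days=7)):
        """Yield (as_of, model) pairs, each trained on the trailing window."""
        as_of = start + window
        while as_of <= end:
            recent = [(text, outcome) for _, date, text, outcome in records
                      if as_of - window <= date < as_of]
            X, y = zip(*recent)
            model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(X, y)
            yield as_of, model
            as_of += step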

REFERENCES

  • 1
    Ailon, N., M. Charikar, and A. Newman. (2005). Aggregating inconsistent information: ranking and clustering. Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, 684-693.
  • 2
    Cattuto, C., V. Loreto, and L. Pietronero. (2007). Semiotic dynamics and collaborative tagging. Proceedings of the National Academy of Sciences, 104(5), 1461.
  • 3
    Cody, W. F., J. T. Kreulen, V. Krishna, and W. S. Spangler. (2002). The integration of business intelligence and knowledge management. IBM Systems Journal, 41(4), 697-713.
  • 4
    Dhillon, I. S., D. S. Modha, and W. S. Spangler. (1998). Visualizing class structure of multidimensional data. Computing Science and Statistics.
  • 5
    Foner, L. N. (1995). Clustering and information sharing in an ecology of cooperating agents. Working Notes of the AAAI Spring Symposium on Information Gathering from Heterogeneous, Distributed Environments, Stanford University, Stanford, CA.
  • 6
    Han, E. H. and G. Karypis. (2000). Centroid-based document classification: Analysis and experimental results. Principles of Data Mining and Knowledge Discovery, 116-123.
  • 7
    Hotho, A., R. Jäschke, C. Schmitz and G. Stumme. (2006). Information retrieval in folksonomies: Search and ranking. The Semantic Web: Research and Applications, 411-426.
  • 8
    Jamjoom, H., H. Qu, M. J. Buco, M. Hernandez, D. Saha and M. Naghshineh. (2009). Crowdsourcing and service delivery. IBM Journal of Research and Development, 53(6).
  • 9
    Lin, H. and J. Davis. (2010). Computational and crowdsourcing methods for extracting ontological structure from folksonomy. The Semantic Web: Research and Applications, 472-477.
  • 10
    McCallum, A. and K. Nigam. (1998). A comparison of event models for naive Bayes text classification. AAAI-98 Workshop on Learning for Text Categorization, 41-48.
  • 11
    Marshall, C. C. and A. J. Brush. (2004). Exploring the relationship between personal and public annotations. Proceedings of the 4th ACM/IEEE-CS Joint Conference on Digital Libraries, 349-357.
  • 12
    Oliveira, S. R. M. and O. R. Zaiane. (2004). Achieving privacy preservation when sharing data for clustering. Secure Data Management, 67-82.
  • 13
    Spangler, W. S., J. Kreulen, J. Lessler and D. E. Johnson. (2002). Modeling Document Taxonomies. IBM Research, Almaden Research Center.
  • 14
    Spangler, W. S., J. Kreulen and J. Lessler. (2003). Generating and browsing multiple taxonomies over a document collection. Journal of Management Information Systems, 19(4), 191-212.
  • 15
    Spangler, W. S. and J. Kreulen. (2007). Mining the Talk: Unlocking the Business Value of Unstructured Information. IBM Press.
  • 16
    Spangler, W. S., J. Kreulen and J. F. Newswanger. (2006). Machines in the conversation: Detecting themes and trends in informal communication streams. IBM Systems Journal, 785-799.
  • 17
    Spangler, W. S., Y. Chen, L. Proctor, A. Lelescu, A. Behal, B. He, T. D. Griffin, A. Liu, B. Wade and T. Davis. (2007). COBRA — Mining Web for Corporate Brand and Reputation Analysis. Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence, 11-17.
  • 18
    Visser, P. R. S. and V. A. M. Tamma. (1999). An experience with ontology clustering for information integration. Proceedings of the IJCAI-99 Workshop on Intelligent Information Integration, held in conjunction with the Sixteenth International Joint Conference on Artificial Intelligence.
  • 19
    Wolfe, J. (2002). Annotation technologies: A software and research review. Computers and Composition, 19(4), 471-497.
  • 20
    Nakai, T., N. Kondo, K. Kise and K. Matsumoto. (2008). Analysis of annotations on documents for recycling of information. Electrical Engineering in Japan, 165, 60-68. doi:10.1002/eej.20516
  • 21
    Massari, L. (2010). Analysis of MySpace user profiles. Information Systems Frontiers, 12(4), 361-367. doi:10.1007/s10796-009-9206-8
  • 22
    Prefontaine, J., J. Desrochers and L. Godbout. (2010). The analysis of comments received by the BIS on "Principles for Sound Liquidity Risk Management and Supervision." International Business & Economics Research Journal, 9(7), 65-72. Retrieved from http://www.journals.cluteonline.com/index.php/IBER/article/view/598
  • 23
    Fernandez-Luque, L. and N. Elahi. (2009). An analysis of personal medical information disclosed in YouTube videos created by patients with multiple sclerosis. Medical Informatics in a United and Healthy Europe, 292-296. doi:10.3233/978-1-60750-044-5-292
  • 24
    Fong, Y. S. (n.d.). The value of interlibrary loan: An analysis of customer satisfaction survey comments. Journal of Library Administration, 23(1-2), 43-54. Haworth Press. Retrieved from http://cat.inist.fr/?aModele=afficheN&cpsidt=2635250
  • 25
    Lee, T. (2005). Nurses' concerns about using information systems: analysis of comments on a computerized nursing care plan system in Taiwan. Journal of Clinical Nursing, 14(3), 344-353. doi:10.1111/j.1365-2702.2004.01060.x