Isolating Natural Problem Environments in Unconstrained Natural Language Processing: Corruption and Skew

Authors

  • Charles Wong, Executive Intelligence, LLC

DOI:

https://doi.org/10.14738/tmlai.53.3229

Keywords:

Natural language processing, skew, corruption, natural behavior, neural networks, intent

Abstract

This work examines the behaviors of a full range of commonly available natural language processors in a natural, unconstrained, and unguided environment. While typical research permissibly constrains the language environment and uses in-depth knowledge to guide the processor for enhanced accuracy, this work purposefully avoids a clean laboratory in favor of a natural, chaotic, and uncontrollable environment. This shifts the focus towards natural processor behaviors in natural, unknown environments. The work provides a standardized framework to compare and contrast the theoretical strengths of the full range of processors. It then examines empirical behaviors across a full range of environments: from typically used baseline sample documents, to actual raw natural texts used in an intent marketing business, to a series of increasingly corrupted and inconsistent sample documents that further differentiate processor behaviors. In all cases, the texts are unconstrained and the processors operate in their most naïve, default forms. The results complement and extend prior work. They add that accuracy-centric processors such as artificial neural networks and support vector machines require both highly constrained environments and in-depth knowledge of the processor to operate. Descriptive-centric processors such as k-nearest neighbors, Rocchio, and naïve Bayes require only highly constrained environments. An explanatory-centric neurocognitive processor such as Adaptive Resonance Theory can operate robustly with neither environmental constraint nor in-depth processing knowledge, but it exposes operations to basic human temporal neurocognitive behaviors.
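As an illustration of the kind of comparison the abstract describes (processors in their default, "naïve" configurations applied to increasingly corrupted, unconstrained text), the following is a minimal Python sketch. It assumes scikit-learn stand-ins (NearestCentroid for Rocchio, MLPClassifier for the artificial neural network), a public sample corpus, and a simple character-deletion corruption function; none of these is the paper's actual corpus, corruption procedure, or its Adaptive Resonance Theory processor, which has no off-the-shelf scikit-learn equivalent and is omitted here.

    # Minimal sketch (not the paper's pipeline): default-configuration text
    # classifiers evaluated on increasingly corrupted, unconstrained test documents.
    import random

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics import accuracy_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC


    def corrupt(text: str, rate: float, rng: random.Random) -> str:
        """Randomly delete characters to simulate document corruption (illustrative only)."""
        return "".join(c for c in text if rng.random() >= rate)


    # Public sample corpus as a stand-in for the paper's baseline and business texts.
    train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
    test = fetch_20newsgroups(subset="test", categories=["sci.space", "rec.autos"])

    processors = {
        "k-nearest neighbors": KNeighborsClassifier(),
        "Rocchio (nearest centroid)": NearestCentroid(),
        "naive Bayes": MultinomialNB(),
        "support vector machine": LinearSVC(),
        "neural network (MLP)": MLPClassifier(max_iter=50),
    }

    # Train each processor once, in its default ("naive") configuration.
    fitted = {
        name: make_pipeline(CountVectorizer(), clf).fit(train.data, train.target)
        for name, clf in processors.items()
    }

    rng = random.Random(0)
    for rate in (0.0, 0.1, 0.3):  # increasing corruption of the test documents
        corrupted_test = [corrupt(doc, rate, rng) for doc in test.data]
        for name, model in fitted.items():
            acc = accuracy_score(test.target, model.predict(corrupted_test))
            print(f"corruption={rate:.1f}  {name}: accuracy={acc:.3f}")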

Author Biography

Charles Wong, Executive Intelligence, LLC

Charles Wong is the founder and CEO of Executive Intelligence, an independent industry technology lab.  He previously worked in banking on Wall Street and as a chief scientist in Boston, and holds a PhD in cognitive and neural systems.  His focus is on designing cognitive artificial intelligence for natural environments.

References

(1) Nigam, K., McCallum, A., Thrun, S., and Mitchell, T. (1998). Learning to classify text from labeled and unlabeled documents, Machine Learning, 39(2), 103-134.

(2) Roussinov, D., and Chen, H. (1999). Document clustering for electronic meetings: an experimental comparison of two techniques, Decision Support Systems, 27, 67-79.

(3) Yang, Y., and Liu, X. (1999). A re-examination of text categorization methods, Proceedings of the 22nd SIGIR, 42-49.

(4) Sebastiani, F. (2002). Machine learning in automated text categorization, ACM Computing Surveys, 34(1), 1-47.

(5) Jing, L., Huang, H., and Shi, H. (2002). Improved feature selection approach TFIDF in text mining, Proceedings of the First International Conference on Machine Learning and Cybernetics, 944-946.

(6) Mittermayer, M. (2004). Forecasting intraday stock price trends with text mining techniques, In: Proceedings 37th Annual Hawaii International Conference on System Sciences (HICSS).

(7) Mooney, R., and Roy, L. (2000). Content-based book recommending using learning for text categorization, In Proceedings of the Fifth ACM Conference on Digital Libraries, 195-204.

(8) Ikonomakis, M., Kotsiantis, S., and Tampakas, V. (2005). Text classification using machine learning techniques, WSEAS Transactions on Computers, 8(4), 966-974.

(9) Kim, H., Howland, P., and Park, H. (2005). Dimension reduction in text classification with support vector machines, Journal of Machine Learning Research, 6, 37–53.

(10) Lan, M., Tan, C., Low, H., and Sung, S. (2005). A comprehensive comparative study on term weighting schemes for text categorization with support vector machines, In: Poster Proceedings of the 14th International World Wide Web Conference.

(11) Zhang, W., Yoshida, T., and Tang, X. (2008). Text classification based on multi-word with support vector machine, Knowledge-Based Systems, 21, 879-886.

(12) Ko, Y. (2012). A study of term weighting schemes using class information for text classification, Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1029-1030.

(13) Colace, F., De Santo, M., Greco, L., and Napoletano, P. (2014). Text classification using a few labeled examples, Computers in Human Behavior, 30, 689-697.

(14) Rumelhart, D., Hinton, G., and Williams, R. (1986). Learning internal representations by error propagation, In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press.

(15) Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain, Psychological Review, 65(6), 386-408.

(16) Cortes, C., and Vapnik, V. (1995). Support-vector networks, Machine Learning, 20, 273-297.

(17) Duda, R., Hart, P., and Stork, D. (2001). Pattern Classification, Wiley-Interscience.

(18) Wong, C., and Versace, M. (2011). Context sensitivity with neural networks in financial decision processes, Global Journal of Business Research, 5(5), 27-43.

(19) Wong, C., and Versace, M. (2012). CARTMAP: a neural network method for automated feature selection in financial time series forecasting, Neural Computing and Applications, 21(5), 969-977.

(20) Bruel-Jungerman, E., Rampon, C., and Laroche, S. (2007). Adult hippocampal neurogenesis, synaptic plasticity and memory: Facts and hypotheses, Reviews in the Neurosciences, 18(2), 93-114.

(21) Barnea, A., and Nottebohm, F. (1996). Recruitment and replacement of hippocampal neurons in young and adult chickadees: An addition to the theory of hippocampal learning, Proceedings of the National Academy of Sciences of the United States of America, 93(2), 714-718.

(22) Hall, J., Thomas, K., and Everitt, B. (2000). Rapid and selective induction of BDNF expression in the hippocampus during contextual learning, Nature Neuroscience, 3, 533-535.

Published

2017-07-13

How to Cite

Wong, C. (2017). Isolating Natural Problem Environments in Unconstrained Natural Language Processing: Corruption and Skew. Transactions on Engineering and Computing Sciences, 5(3), 28. https://doi.org/10.14738/tmlai.53.3229