Artificial intelligence, machine learning, and deep learning based on neural networks are concepts that have recently come to dominate venture capital funding; startup formation, promotion, and exits; and policy discussions. Highly publicized triumphs over humans in Go and poker, rapid progress in speech recognition, image identification, and language translation, and the proliferation of talking and texting virtual assistants and chatbots have helped inflate the market caps of Apple (#1 as of February 17), Google (#2), Microsoft (#3), Amazon (#5), and Facebook (#6).
While these companies dominate the headlines (and the war for the relevant talent), other companies that have been analyzing data, or providing tools for analysis, for years are also capitalizing on recent AI advances. Cases in point are Equifax and SAS: the former is developing deep learning tools to improve credit scoring, and the latter is adding new deep learning functionality to its data mining tools and offering a deep learning API.
Both companies have a lot of experience in what they do. Equifax, founded in 1899, is a credit reporting agency, collecting and analyzing data on more than 820 million consumers and more than 91 million businesses worldwide. SAS, founded in 1976, develops and sells data analytics and data management software.
The AI concepts that make headlines today also have a long history. Moving beyond speedy calculation, two approaches to applying early computers to other types of cognitive work emerged in the 1950s. One was labeled “artificial intelligence,” the other “machine learning” (a decidedly less sexy and attention-grabbing name). The artificial intelligence approach was rooted in symbolic logic, a branch of mathematics, while the machine-learning approach was rooted in statistics. And there was another important distinction between the two: the artificial intelligence approach fit the dominant computer science paradigm, in which a programmer defines what the computer must do by coding an algorithm, a model, a program, in a programming language. The machine-learning approach instead relied on data and on statistical procedures that found patterns in the data or classified it into different buckets, allowing the computer to “learn” (i.e., optimize its accuracy at a given task) and to “predict” (i.e., classify, or sort into buckets) new data fed to it.
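The “learn from data, then predict” pattern described above can be sketched with a toy nearest-centroid classifier. The data, function names, and labels here are invented for illustration; this is a minimal example of the statistical style of machine learning, not any particular company's or library's method.

```python
# Toy machine learning: no hand-coded rules. We fit a simple statistical
# model (one mean point, or "centroid", per class) to labeled examples,
# then "predict" the bucket for new data by nearest centroid.

from math import dist  # Euclidean distance (Python 3.8+)

def fit_centroids(samples, labels):
    """Learn one centroid (per-coordinate mean) for each class label."""
    sums, counts = {}, {}
    for point, label in zip(samples, labels):
        acc = sums.setdefault(label, [0.0] * len(point))
        for i, x in enumerate(point):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(s / counts[lbl] for s in acc)
            for lbl, acc in sums.items()}

def predict(centroids, point):
    """Classify a new point by whichever learned centroid is closest."""
    return min(centroids, key=lambda lbl: dist(centroids[lbl], point))

# Hypothetical 2-D training data: two clusters, labeled "A" and "B".
X = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (4.8, 5.2)]
y = ["A", "A", "B", "B"]

model = fit_centroids(X, y)
print(predict(model, (1.1, 0.9)))  # near the "A" cluster → A
print(predict(model, (5.1, 4.9)))  # near the "B" cluster → B
```

The key contrast with the symbolic approach is that nothing in the code states what makes a point an “A” or a “B”; that boundary is recovered from the data itself, and improves or degrades with the quality of the examples supplied.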