There are only five ways to categorize information: alphabet, time, location, category, and hierarchy.
Present technologies organize the first three quickly, and a company organizing data along these three criteria can gain little relative advantage. Where companies can gain an advantage is in solving the harder problems: how to categorize information and how to rank its priorities. Which is where AI comes in.
AI-based companies compete by retrieving data and identifying patterns, memories in effect, from massive data banks that they have organized for fast access. Briefly, recognizing patterns in massive data sets, and putting those patterns to a variety of uses, is what AI can achieve.
The three features, accuracy, speed, and low cost in recognizing patterns, bring us back to "category" and "hierarchy," which point to both the potential strengths and the limitations of AI, that is, of the language models and the data upon which the extraction of patterns depends.
AI specialists determine how to classify the data in the ways they expect users will retrieve them, searching for patterns and priorities. It is astonishing what these models and data sets have already achieved: predicting how to complete sentences; promptly answering questions about history, political events, literature, entertainment, economics, stock and bond markets, and exchange rates; and powering robotics. How accurately they do so depends on how these specialists categorized the data and decided to rank them in terms of credibility. For example, when one asks ChatGPT a question about a political event, it replies that a majority interpret it in one way, and a minority in a different way. This is useful, but it is still just retrieving memories quickly. AI cannot resolve doubts.
The user must weigh the credibility of the different data. The mere fact that ChatGPT, say, answers that most scientists and academics agree on a certain point of view does not imply that the view is accurate. AI does not and cannot solve this problem. Mathematical modelling is about logic, and logic knows nothing; it is about consistency. AI using the big "language models" retrieves what is "known," what is in the data set, and does so fast. At best, AI identifies a deviation in interpretation or opinion about events or a scientific inquiry.
Assume that a single scientist deviates from the orthodoxy reflected in the massive data. AI cannot extrapolate and recommend an action drawing on this deviation, since it is just one observation, and an infinite number of lines can be drawn through one point. Yet it is precisely "deviation" that defines "new knowledge" and requires human "intelligence." Perhaps it would be far more informative to rename AI as "RAAM," for "Rapid Access Artificial Memory."
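To put the point about a single observation in slightly more formal terms (a minimal sketch, not tied to any particular AI system): a line through a single observation (x0, y0) can be written y = y0 + m(x - x0), and it passes through that point for every possible slope m. The one observation pins down nothing about m, and hence nothing about where the line, or the extrapolation built on it, leads next.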
Which brings us to where the big advantage of AI may lie: in the three-dimensional world, and not in the "world of words."
Data used in robotics are derived by observing routine, memorized behavior and organizing it into categories: how people move their hands, legs, eyes, lips, and so on. AI scientists extract relationships among these movements and transform them into digital information, which in turn is transformed into standardized robotic movements. The advantage of such movements, be it in factories or in driving, is that standardized data help avoid the mistakes people make. When doing something, people make inadvertent movements; their attention is diverted even for split seconds, producing "noise," that is, mistakes, even in the most routine work. They drop things, push the wrong button, get distracted, and misinterpret signals.
In principle, AI could avoid such human "noises," "deviations," mistakes, when robots work on assembly lines, drive cars, or direct missiles. For this to happen, data must be stored accurately and retrieved speedily, and, as far as driving is concerned, the data must be such that the system reacts fast to deviations from routine patterns of driving, say a drunken driver, or one falling asleep at the wheel. According to recent data compiled by the company, Waymo vehicles had far fewer crashes at intersections and with pedestrians than human drivers did over an equal distance.
Briefly, AI's future is in robotics: substituting for routine jobs and routine movements, and, in cases of accidents, powering sophisticated prostheses that replace missing body parts, applications for which the massive deployment of data centers will be required. In the world of "words," its use brings speed and, perhaps, greater accuracy and better-informed actions.
However, the latter is not a given. If, say, the infamous fraudulent book The Protocols of the Elders of Zion is digitized, and millions upload it, or any other "fake news," to the various so-called "social media," then with 8 billion people now roaming this planet, it is enough that 0.01 percent take it literally: that is 800 thousand people. Might some act upon such misinformation? Who knows? A dozen or so engineers brought down the Twin Towers on 9/11.
So, can AI be a "threat to democracy," deepening divides, creating echo chambers, and encouraging conflicts rather than mitigating them?
Perhaps the solution lies with the AI companies and their scientists, in the ways they organize the data and define the hierarchies: hold them liable if the hierarchies and categories they build do not stand up to scrutiny. Since Meta, OpenAI, and others offer top AI scientists extraordinary compensation packages, with some offers reaching up to $300 million, it may not be too much to ask for certain obligations in return.