Artificial Intelligence, an addendum
In a previous article, I talked about Artificial Intelligence (AI). Since then, there have been some developments in the field, most recently the release of ChatGPT in November 2022. Have my views changed since then? Not at all. As I said then, modern "AI" is nothing more than statistical analysis. But what does that mean?
Many decades ago, before computers were used for such things, psychologists developed personality tests (the first ones put to practical use date back over 100 years). One modern model of human personality is the Myers-Briggs Type Indicator, which divides people into 16 basic personality types. You can answer, say, 30 carefully crafted multiple-choice questions, and your answers can be matched against these personality types. From that, general conclusions can be drawn about the test taker that are nearly always insightful. In fact, many people who take these tests are shocked at how insightful the conclusions can be. These days, one can even take such a test online, without any other human interaction, and receive the conclusions simply by answering the questions on a web page. How does this work?
This is an example of statistical analysis, and a quite simple one at that. There are three aspects to this: the model, the data, and the statistical analysis. The model (the 16 personality types) was developed over many years by psychologists who used studies and their experience to determine what the types are. The data comes from the person taking the test, who is answering questions. These questions were crafted by psychologists to correlate in specific ways with the pre-defined personality types. If you were to lay out these types on a map, then each answer to each question would correspond to a specific location on that map. 30 answers give you 30 locations on the map. If the questions are answered honestly, the majority of these points will be grouped closely together and correspond to a single one of the 16 personality types. Once the type is identified, the computer can spit out a set of general insights that most test takers find strikingly accurate. That mapping between answers and personality types is the statistical analysis. It is simple enough that a computer isn't even needed to make the determination quickly. This approach is used for all kinds of analysis. You've probably taken one or more tests over the years where you answer some questions, add up the number you answered a certain way, and that total provides some sort of insight.
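The mapping described above can be sketched in a few lines of code. This is a toy illustration, not any real instrument: the type names, trait axes, and answer coordinates are all invented for the example. Each answer is treated as a point on the map, the points are averaged, and the nearest type wins.

```python
# Toy sketch: classify a test taker by placing each answer on a 2-D "map"
# of traits and picking the personality type whose location is nearest.
import math

# Invented example: four types laid out on an introvert/extrovert x-axis
# and a thinking/feeling y-axis. Real instruments use more dimensions.
TYPES = {
    "quiet analyst":    (-1.0, -1.0),
    "quiet empath":     (-1.0,  1.0),
    "outgoing analyst": ( 1.0, -1.0),
    "outgoing empath":  ( 1.0,  1.0),
}

def classify(answer_points):
    """Average the answer locations, then return the closest type."""
    n = len(answer_points)
    cx = sum(x for x, _ in answer_points) / n
    cy = sum(y for _, y in answer_points) / n
    return min(TYPES, key=lambda t: math.dist((cx, cy), TYPES[t]))

# Five answers, mostly clustered in the "quiet analyst" corner:
answers = [(-0.8, -0.9), (-0.6, -0.7), (-0.9, -0.4), (-0.7, -1.0), (0.2, 0.1)]
print(classify(answers))  # -> quiet analyst
```

Note that one stray answer (the honest-but-atypical one) doesn't change the result; the grouped majority does, which is exactly why honest answers cluster onto a single type.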
Now let's imagine that instead of 16 personality types, the model was made even more specific - say, 100 personality types. And let's say that instead of 30 questions, you had to answer 300. Much more specific conclusions could be reached. If you are amazed by the accuracy of the Myers-Briggs test, you would be completely shocked by the accuracy of this new test. Now, assume that the model is composed of 1,000 types, or 10,000, or 100,000. And assume that instead of 300 data points from questions, one had 3,000, or 30,000, or 300,000 data points. What insights could one glean from that? It would go beyond determining your personality. It could predict what music you like, what leisure activities you prefer, what faults you have, and more. Well, no one is going to answer 300,000 questions. But you don't have to. Being online provides those data points to anyone who can gather them. They are gathered from the Google searches you do, the web sites you visit, the podcasts you listen to, the various just-for-fun surveys you take, the music you listen to on YouTube and/or Spotify, the posts you like on Facebook, the people you follow on Twitter. And on and on. Not only does this allow for analysis of society and culture in general, and of subcultures in particular, but also of you personally. Chances are, someone with this information will know you better than most of your friends and family do.
The statistical analysis is a little more advanced, but not by much. But who is going to develop 100,000 types to correlate your data against? This is obviously beyond the ability of a person, or even a group of people. However, it is not beyond the ability of a computer using statistical analysis. In other words, this same statistical analysis can determine the model itself. But where does the data come from for the computer to create the model? It has to be fed into the computer so that the program can create its model. This is called "training" the program. As everyone knows, "garbage in, garbage out". So the accuracy of the model is highly dependent upon the quality of the data it is trained on. As one might expect, if a model is trained on flame wars from Twitter, the model's accuracy is going to be highly suspect. So one has to be selective about what data is used for training. Therein lies the problem. No one is completely unbiased. The best we humans can do is be aware of this fact and try to compensate for it (assuming we even recognize where our biases lie). But imagine someone who wanted the wrong conclusions to be reached by the AI. A Republican could feed in biased data to have the AI reach certain conclusions about Democrats that weren't true. Or vice versa. Catholics could do the same with Mormons, or China could do the same with South Korea, or... Well, you get the point.
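A minimal sketch of "the computer determining the model itself" is the classic k-means clustering loop. Everything here is invented toy data; the point is only that the group centers emerge from the training data rather than from a psychologist's hand-built model - and, by the same token, that biased training data would produce biased centers.

```python
# Toy sketch: k-means clustering derives the "types" (cluster centers)
# from unlabeled data points instead of from a hand-built model.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)  # start from k random data points
    for _ in range(iters):
        # Assign each point to its nearest current center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Two obvious behavioral clusters; k-means finds their centers unaided.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
print(sorted(kmeans(data, k=2)))
```

With 100,000 types and 300,000 data points per person, the loop is the same idea at a vastly larger scale; only the computing power changes, not the nature of the analysis.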
This application of AI can do some unexpected things, such as creating new music in the style of Beethoven. Again, this is done through nothing more than statistical analysis of the composer's thematic development, favored keys, chord progressions, and so forth. The results may be amazing, but the underlying AI is really just doing statistical correlations. In theory, a human could do the same, but it would be tedious to gather the data and perform the analysis. The AI can do it in moments. But the AI isn't being creative.
Likewise, AI could be trained on the styles of certain authors (or of all authors) and produce output in the style of, say, Benjamin Franklin, Robert Frost, or Shakespeare. You give it a topic, and it creates a poem in the style of Oscar Wilde, or a political treatise in the style of Thomas Jefferson. But the AI isn't likely to come up with novel insights to include in the poem or treatise.
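A toy version of this style imitation is a first-order Markov chain: count which word follows which in a sample of the author's text, then generate new text by sampling those counts. Real systems are vastly larger and more sophisticated, but the principle remains statistical correlation. The training sample here is just a familiar scrap of Frost; the generated output recombines his word transitions without any understanding of them.

```python
# Toy sketch of style imitation: learn word-to-word transition statistics
# from a sample text, then generate new text by sampling them.
import random
from collections import defaultdict

def train(text):
    """Count, for each word, which words follow it in the sample."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=1):
    """Walk the chain from a start word, sampling learned transitions."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)

sample = ("the woods are lovely dark and deep but I have promises "
          "to keep and miles to go before I sleep")
print(generate(train(sample), "and"))
```

Every word pair in the output occurred somewhere in the training text, which is exactly why the result sounds like the source while containing no new thought.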
As with all technology, AI can be used for good or ill. Creating a poem in the style of Poe may be nothing more than amusing, but using AI to diagnose disease is certainly coming and will be very useful. So, what is there to fear? The main point at which useful technology becomes harmful is when people over-rely on it. It is why antilock braking systems (ABS) have not reduced the number of car accidents as much as expected: people rely on the ABS and drive less cautiously when they have it available. Likewise, if we start to rely on the conclusions of AI without understanding what data it has been trained on, that will certainly be harmful. And then there is the spectre of bad actors using all the data about you to subtly manipulate you without you even realizing it. All they need to do is provide a goal to the AI, and it can affect every human who comes into contact with it. It need not even be a major influence to accomplish nefarious goals. Imagine shifting a close election by a few points through biased conclusions fed to those relying on information from the AI. There is evidence that this has been done in the past simply by altering the results for certain Google searches. If we come to rely on AI, and we will, the influence will be even more subtle and have greater effect. Because the AI will understand how you react to things, it will customize your experience personally to bring you to the mental and emotional space that serves the purposes of those who control it.
How do we mitigate the dark side of AI? I don't think we can, at least in terms of the general population. People are sheep, and they will be led by those who manipulate them through fear and misinformation. Any government attempt to correct this can only run afoul of people's freedoms.

But how can you, dear reader, protect yourself? First, understand what motivates you. What do you fear? What are your deeply felt needs? People will use these things to manipulate you, and AI will use them to manipulate you far better than the best con man who ever lived could. Only by being on guard against appeals to these parts of your personality can you avoid being led wherever someone wants to lead you. Second, reduce the amount of information you give away. Don't answer surveys, especially online surveys. Don't take those "fun" tests to determine what potato you are most like. Reduce your online use in general, but especially on your phone. Be careful about offering your opinion on things through "likes" or "thumbs up" or other voting mechanisms. Don't make your decisions based solely on online searches. Go out of your way to locate voices that are in the minority on important issues. Do not be amazed at AI - it is just conclusions derived from data, and you don't know what that data is. Don't allow yourself to rely on it. If I could summarize all of this, it would be these two guiding principles: 1) know yourself, and 2) don't let AI know anything about you. But the sad fact of the matter is that unless you go completely off-grid, it is probably impossible to keep AI from learning something about you. So all you can do is try to minimize it, perhaps trick the AI by often providing false or misleading information (poisoning the well), and live by principles rather than by circumstances and emotion. And by all means, be suspicious of how technology is being used to manipulate and control.