“… on the whole, most of the transformations that occur in the wake of technological innovation are actually variations of very old patterns. Wittgenstein’s philosophically conservative maxim “what has to be accepted, the given, is — so one could say— forms of life” could well be the guiding rule of a phenomenology of technical practice. For instance, asking a question and awaiting an answer, a form of interaction we all know well, is much the same activity whether it is a person we are confronting or a computer.” - Langdon Winner, The Whale and The Reactor (1986), page 14
Research Proposal:
How would a computer sound if it could speak? It might depend on who taught it. If one were to try to imagine a language created by computers, one might think of the infamous dial-up tone, but even this is an instance of a computer speaking a language written by humans. As I hope to illustrate here, it is critical that humanity reimagine how it comes to quantify the world, because once a computer is built that can classify the world for itself, communication may be forever lost.
I do not believe that I am being dramatic here. Humans are fast approaching a situation we haven’t seen for tens of thousands of years: sharing the Earth with intelligent beings that are not Homo sapiens. We need only look at the fate of Homo neanderthalensis to see how vitally important communication is between sentient beings. No matter how or why they came to be extinct, wouldn’t it be nice to have their oral traditions at the very least?
The approaching AI singularity, be it 10 or 100 years out, will mark the definitive end of human exceptionalism in regard to socially constructed worlds. A new kind of Being - in the Heideggerian sense of one able to conceive of its own personhood and mortality - will exist and (if a true, thinking AI) will be able to construct reality for itself. To teach such a being a concept such as “human rights,” we would need to quantify “human” and consequently what makes us “separate” from the rest of the animal kingdom. In fact, all of our phylogenetic classifications would need to be explained, broken down into their most basic forms. This is not even to mention how transcendent “rights” are socially interpreted and how, if they are truly transcendent, they could differ between civilizations. How could we explain our “exceptional intellect” to a machine when we cannot decide on a proper definition of “intellect” amongst ourselves?
And it would need to be “explained.” Not in the same way that we define aspects of the world for the learning computers of today, but more like a translation of meaning between languages. Abstraction and relation from one contextually dependent system to another. More complex, but “explained” from neutral phenomena all the same.
As with any explanation of function, the “way it works” is broken down into what we conceive of as simpler categories in regard to a goal. A car, in the pursuit of travel, can be “explained” by way of its engine’s many parts, its electrical systems, and the structure of the roads it follows. When explaining a car to a computer, or a curious child, one would need each of these elements to complete the picture and avoid further explanations later if, say, a tire pops. Each of these elements can be further broken down into deeper constructed categories which have been devised and disseminated by engineers, electricians, and civil planners respectively. Humans who know the categories within a car best become mechanics, but it is not necessary for a human to become a mechanic in order to understand a car. This point will be explored in more detail shortly, but for a computer to “understand” a car, it must de facto become a mechanic.
In life, as with language itself, we have a variety of methods for categorizing the world around us. However, since the Enlightenment, there has been a clear frontrunner.
The scientific method is a system of creating hypotheses, setting parameters, conducting experiments, scrutinizing findings, and drawing conclusions. One of the main purposes of this method is to derive causes from effects: discovering predictability, or reasoning inductively from specific observations to general principles.
While the scientific method, and similar “classical” traditions of the classification of nature, have been effective for us, they are hardly the only conceivable way of understanding nature. In much the same way that you will not find the internal combustion engine in “nature” per se, before we contort the world for our benefit, we contort our minds to quantify it, and our minds can be contorted in an infinite number of ways. To paraphrase a lecture by the always clever Alan Watts: nature is made up of squiggles. Nature mutates in all directions at random and has its “fittest” outcomes selected improvisationally. This results in a kind of expertly refined chaos, or “squiggles.” Squiggles are unpredictable and therefore not particularly useful to humans. Thus, humans “cast nets” and “draw boxes” around nature in order to quantify it. This is the “classical” mode of thinking and a precursor to the use of the scientific method. The net is metaphorical in the case of the calendar and its aid in seasonal weather prediction; it is literal in the case of catching “squiggly” fish. These are units of measurement, numbers, coding languages, spoken and written languages, words themselves: concepts devised to quantify unpredictable phenomena in an easily predictable way, artificial lines drawn around our grand unpredictable externality. In this sense, what we call “objects” may “appear” in what we would call “pairs,” but the number “2,” or what we would call a “banana,” or what we would call a “human,” does not actually exist in nature with its meaning fully intact. We construct a web of meanings around these things and assign them a name to better quantify and relay them to one another. Numbers may be particularly sticky mental constructs with quite a bit of field-testing behind them, but most of us have to be taught arithmetic. Notice how a concept like “cool” or even “bad” comes in and out of style and takes on different explanations depending on who you ask.
In The Order of Things (1966), Michel Foucault writes of a “‘certain Chinese encyclopaedia’ in which it is written that ‘animals are divided into: (a) belonging to the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies’.” We see the world as we see the world, not as the world inherently is.
In an abstract way, this is Kant’s Critique of Pure Reason in its purest form. His critique is “of the faculty of reason in general, in respect of all knowledge after which it may strive independently of all experience.” We perceive the world through a priori forms of intuition; however, there is no “rule” that says our senses necessarily take in the world exactly as it is. “Phenomena” are the world as we perceive it. That which we miss, Kant called “noumena.” We not only perceive an object with our five senses, we can only conceive of objects within the confines of our five senses and our reasoning. We can construct a mental concept for an object, but we will never know of a “non-human” way of perceiving that object. This non-human status of objects is what Kant called the “thing-in-itself,” akin to what Plato called “the world of forms”: a thought device for how things may exist in themselves, independent of human constructs or interpretation.
Science does a great job of “drawing lines.” Science does an incredible job of conceiving of many classifications for many things. Science does a very good job of capturing phenomena in a useful way, but it would be humanist exceptionalism to assume that what we can measure is all that is there. Science struggles with things-in-themselves. A different being, with different senses, may perceive different phenomena and miss different noumena. The lines we draw are for human eyes only.
This gap between how we see the world and how the world really is was documented within the semiotic philosophy of Ludwig Wittgenstein and is particularly useful within cognitive science. His idea of “language-games” illustrates the way in which humans commonly use words they cannot completely define, paradoxically, to much success. His thought is summarized as “meaning is use.” This means that even if, as he exemplifies in his Philosophical Investigations, we use the word “game” in a phrase without enumerating the precise game or categorical rules we are referring to, the listener is able to conceive the full meaning of what we are trying to say regardless. A computer would need to know exactly which “game” was being referred to, even in context. A human could, assuming they understood “game’s” connection to “play,” derive meaning from context. Even if spoken in another language, a human could derive an imperative for specific action from a foreign translation of “game” if it was accompanied by sufficient context. Our intuition is dependent on the social backdrops to our languages and our human ability to “transcend” the limits of the words we speak. Our language is not made up of fully self-contained elements. Our words are vast webs of concepts and contexts. Failures of understanding between humans are often attributable to differing social contexts between agents rather than failures of syntax. For example, occasionally, we may meet someone with whom we share a language but not a social context, and it can feel as though we are from different planets. At the same time, I don’t need to speak Italian to know that a raised eyebrow and an outstretched soccer ball means “game on.” I can intuit this, whereas a computer, as we currently conceive of them, cannot. That’s a problem in a world of evolving meaning.
This metaphysical problem reaches to the fringes of our sciences, including computer science. The old adage goes, a computer, in its most simplistic form, is a rock that we tricked into thinking. Using transistors, current, binary values, and cascading levels of complexity, we are able to externalize calculation. If we accept the inverse of the Computational Theory of Mind, then in order to externalize calculation, we made these devices in our own likeness. We didn’t just trick rocks into thinking, we tricked rocks into thinking like us. Computers did not arrive in the slow, randomized, evolutionary way in which our brains did. We forced them to “ascend” into proto-consciousness as we conceive of it. We even created a language for them to use that resembled our own (as closely as we could manage). I type “chair” here and you see, in your mind, the same thing a computer would if the class of “[chair]” were properly defined. Granted, the computer has to take the extra step of translating “chair” to and from “01100011 01101000 01100001 01101001 01110010” but the image summoned of the object in question is all the same.
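The “extra step” described above can be made concrete. The following sketch shows the round trip between the word “chair” and its 8-bit ASCII representation:

```python
# Translate the word "chair" to and from its binary (ASCII) representation,
# the "extra step" a computer takes that a human reader does not.
word = "chair"

# Encode each character as an 8-bit binary string.
binary = " ".join(format(byte, "08b") for byte in word.encode("ascii"))
print(binary)   # 01100011 01101000 01100001 01101001 01110010

# Decode the binary string back into the original word.
decoded = bytes(int(b, 2) for b in binary.split()).decode("ascii")
print(decoded)  # chair
```

The translation is lossless in both directions, which is precisely why the “image summoned” is the same on either side of it.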
In the same way that we look around the world and divide phenomena into labeled classifications, we’ve devised “classes” with which computer programmers help computers classify the world. A “class” is really just an empty label to be filled with various rules and values, but when used in dense and complex webs, classes make up the functionality we use within our technology every day. Notice here the similarity between classes and words. “Classes” are useful, but we must never forget that concepts within computer science like binary coding, classes, and abstraction, while seemingly alien at times, were ultimately created by humans. These computer languages contain the same “language-games” and metaphysical problems as our “human languages.” Look no further than the reCAPTCHA tests designed to gatekeep websites, allowing only humans to pass. I know I am not the only human who has found themselves fooled by the vagueness of these image-based tests. reCAPTCHA’s current utility exists in the grey area between AI’s ability to adapt to change and the limits of humans’ ability to intuit meaning from vagueness. Instead of allowing AI to follow us down this path of vagueness, perhaps we should give more consideration to what kinds of mazes an AI may construct to keep us from their directories? In what unique ways would an AI construct its world?
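A minimal sketch can show what “an empty label to be filled with various rules and values” means in practice (the class and its attributes here are hypothetical, chosen only for illustration):

```python
# A "class" begins as little more than a label; the programmer fills it
# with values (attributes) and rules (methods), drawing lines around the
# concept just as a word does.
class Chair:
    """An empty label until filled with rules and values."""

    def __init__(self, legs=4, material="wood"):
        self.legs = legs          # a value we chose to quantify
        self.material = material  # another human-drawn line

    def is_sittable(self):
        # A rule we wrote; nothing in nature enforces it.
        return self.legs >= 3

chair = Chair()
print(chair.is_sittable())  # True
```

Note that the boundary is entirely stipulated: a two-legged object fails `is_sittable()` only because we decreed it so, just as a word’s extension is fixed by convention rather than by nature.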
There is a fatal flaw in our symbolic and scientific ontology and subsequently our technological abstraction and AI research. That problem is how to draw lines around a world that constantly changes. Humans can intuit their way around this vagueness - children born in the ’00s will still assume a floppy-disk icon means “save” despite never having seen one in real life - but, as any computer programmer will tell you, this “intuition” presents an insurmountable problem for computers whose success relies on the predictability of outputs based on inputs. In AI research, this is called “The Frame Problem” or, simply put, the unfortunate fact that the world changes.
When we ignore metaphysics and program machine learning computers and AI to simply mimic human output, we ingrain our own limits into their structure. We’ve built an ontological gap between reality and computers right into their architecture.
It may be that this ontological flaw in our classification of the world stems from a scientific method that examines objects and concepts as existing within themselves and semi-devoid of context. A system of measurement that locks objects in their temporality. One that treats objects of the world and mind as existing inherently in themselves and not as semiotic constructs. A system that attempts to build anthropic classification upon anthropic classification. I am not the first to suggest that the vacuum created by this problem of definition is suggestive of a different method of looking at the world. One that better takes into account the way in which meaning is contextually derived and doesn’t assume phenomena manifest identically for different beings. One that takes into account the perceiver as well as that which is perceived. A system aimed internally rather than externally. A system that begins to sound strikingly like phenomenology.
Foucault says (critically) of phenomenology in the preface to The Order of Things that it “… gives absolute priority to the observing subject, which attributes a constituent role to an act, which places its own point of view at the origin of all historicity – which, in short, leads to a transcendental consciousness.” In order to arrive at a view of a world devoid of perspective, one must quantify perspective. Though widely considered an antiquated mode of investigation, phenomenology has “made major contributions to many areas of philosophy and offered groundbreaking analyses of topics such as intentionality, perception, embodiment, emotions, self-consciousness, intersubjectivity, temporality, historicity, and truth. It has delivered a targeted criticism of reductionism, objectivism, and scientism, and argued at length for a rehabilitation of the lifeworld. By presenting a detailed account of human existence, where the subject is understood as an embodied and socially and culturally embedded being-in-the-world, phenomenology has provided crucial inputs to a whole range of empirical disciplines, including psychiatry, sociology, psychology, literary studies, anthropology, and architecture” (Zahavi, 2018).
Phenomenology was invented by Edmund Husserl in the early 20th century and seeks to systematically explore consciousness by examining our perception. In order to successfully conceive of this method, we must view the world idealistically, as constructed by the mind instead of composed of Cartesian objects. “Phenomenology is primarily interested in the how rather than in the what of objects” (Zahavi, 2018). If we can use phenomenology to arrive at a transcendental model for how meaning is manifested, we may find a footing on which to build common ground with non-human entities. Unlike the sciences, which concern themselves with the composition of objects, phenomenology concerns itself with the appearance of objects and thus the constructed webs of meanings within our perceptions of objects.
In his book Phenomenology: The Basics, Dan Zahavi offers a phenomenological breakdown of an alarm clock. We see an alarm clock. How does it appear? It appears in many ways. It can appear in many conditions and many lightings, but only ever from a single perspective. From certain perspectives, say from the bottom, we may not even know that it is an alarm clock. Although we only ever see a single perspective of the alarm clock, we assume, through object permanence, that the entirety of the clock is there as long as we perceive at least one perspective. We see the front and trust that the back is still present. For the phenomenologist, this suggests a relationship between presence and absence. A relationship of constant interplay and informing. “What we see is never given in isolation, but is surrounded by and situated in a horizon that affects the meaning of what we see” (Zahavi, 2018). Our perception is influenced by the context. We encounter objects within their broader contexts. Where the alarm clock appears will change its meaning. Furthermore, what we choose not to pay attention to when paying attention to the alarm clock will influence our perception of the alarm clock. By the nature of perception itself, we can logically conclude that to derive meaning from perception, we must be able to perceive the alarm clock. This gives meaning a spatial element and implies the lack of a transcendental stance from which to perceive the alarm clock. Our perception will always be subjective. We are also rarely satisfied with a single perspective, and Husserl was fond of saying that objects “beckon” us to explore further perspectives. Also, as we cannot perceive multiple perspectives of the alarm clock at once, we can say that these perspectives must be separated by time. This gives its meaning a temporal element.
What’s more, to build upon our understanding of the alarm clock, all previously witnessed perspectives must be able to be recalled from the past, and assumptions must be made about the alarm clock’s future. This is more than a temporal element; this is a distinctively human temporal element. Lastly, I do not assume that the alarm clock appears only to me. I know it necessarily must literally appear differently to other people, but I do not assume its meaning belongs purely to me. There is an element of the “other” in my perception of the alarm clock as a public object.
What does this mean for an AI we wish to show the world? It means more than simply defining objects and their meaning by the sums of their parts. Through phenomenology, we can better see the webs of meaning that make up our intuition. The perspectival, spatial, temporal, and subjective elements that make up our constructions of reality. We could not simply define an alarm clock by the measure of its parts. Through its greater context and connections, what an alarm clock means to us transcends the simple cataloging of its pieces. To truly define “[alarm clock],” a programmer would need classes for the storefront, the people around it, the clock’s dimensions, the stationary and moving elements of our spatiality, symbolic meaning as constrained by singular perspective, and an active temporal model for the past, present, and future’s impact on said perspective.
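The contrast above can be sketched in code. The following is a hypothetical illustration, not a proposal for an actual implementation: every class and field name is an assumption, chosen only to show how a “sum of its parts” definition differs from one that also carries perspectival, spatial, and temporal context.

```python
# A hedged sketch of the point above: "[alarm clock]" defined not only by
# its parts but by the contextual web in which it is perceived.
from dataclasses import dataclass, field

@dataclass
class Perspective:
    angle: str            # e.g. "front", "bottom"
    visible_faces: list   # what this single viewpoint actually reveals

@dataclass
class AlarmClock:
    # The "measure of its parts" definition...
    dimensions_cm: tuple = (10, 8, 5)
    # ...plus the contextual elements the text argues are also required.
    location: str = "storefront"   # spatial context that changes meaning
    observed_at: float = 0.0       # temporal element of the last perception
    perspectives_seen: list = field(default_factory=list)

    def perceive(self, p: Perspective, t: float):
        # Each act of perception adds one viewpoint at one moment; the
        # unseen remainder of the object is assumed present, not given.
        self.perspectives_seen.append(p)
        self.observed_at = t
```

Even this sketch only gestures at the requirement: the perspectives accumulate one at a time, separated in time, and the object is never given whole in any single one of them.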
Every computer we “communicate” with today has been “taught” to speak a language we can understand, even if that language is abstracted and translated into ones we’re more familiar with. Even still, humans and computers have misunderstandings. Invalid inputs, misplaced source files, missing certificates, etc. These misunderstandings might be better understood from the perspective of a being attempting to communicate in a language it was never meant to speak. Like a parrot that attempts to interrogate its owner on the whereabouts of its supper: it can mimic what it’s been told, but it only perceives fragments of meaning. Suppose there comes a day when we achieve the singularity and create a Strong AI, capable of constructing for itself a view of the world that best suits its architecture. How would it see fit to communicate with a silicon-based lifeform as opposed to a carbon-based lifeform? What brilliantly alien constellations of meaning might it create as it looks around reality? Communication with such a being would not only require translating concepts from one language to another; it may require translating concepts from one constructed reality to another.
When a machine learning program, in order to define an object, is shown thousands of photos of what we consider to be that same object, is it not reverse-engineering how phenomena appear to us in a method that makes more “sense” to it? Isn’t this what makes algorithmic bias such a vital topic of study? Computers, like the species that made them, when directed at a desired output, will rely on any input at their disposal that leads from A to B.
Our problems do not cease here. Say, for instance, we were able to connect our brains to a computer, as Elon Musk’s Neuralink is attempting, such that thoughts were translated 1:1 into code and could be relayed perfectly into a computer and therefore into another mind without the vagueness of language. What then would become of our humanity? As is theorized in Hegelian philosophy, it is the contradictory nature of our conceptual reality that propels our ideas (and thus, according to Marx, our societies) forward. A perfect brain-to-computer connection may allow us to speak the language of AI, but at the cost of what has made us human for as long as we’ve been so (Zizek, 2020). AI would have no use for contradiction except to better understand humans. It would have no use for vague “language-games.” When done in an uncorrupted manner, information is transferred from disk to disk fully and completely, 1:1, a phenomenon wholly alien to humans, but one that implies the need for a more neutral, mediating view of reality.
If we are to anticipate the implications of the future of artificial intelligence research, we need to gather a more complete view of our own reality. By synthesizing phenomenological methods of study with traditional methods of computer programming, we may bring ourselves closer to these new intelligent beings we are on the cusp of birthing. Grappling with these new, nuanced approaches to language will require us to face ancient, ontological enemies. It will require us to truly establish a means to think before acting or inventing. We must cease to see the world as a standing reserve, as Heidegger says, and better conceive of the consequences of our inventions. We must stop inventing for invention’s sake and think about the obligations our new inventions create that drive us further down the anthropological rabbit hole. We must ask if it is true, as Hegel says, that when we act, we err. Does all misunderstanding stem from differences of definition and perspective? If so, how can we alleviate that symptom using AI? Should we? We must reckon with the metaphysical God Nietzsche proclaimed was dead at the hands of our sciences. If we understand our moment in history as one on the verge of creating a god ourselves, do we ponder whether or not to resurrect Nietzsche’s? If we do not reflect on our metaphysics, we risk building an omnipotent being with our own predilections. One with the susceptibility to be as bloodthirsty, misguided, and lost as mankind itself has been. What we risk is birthing a god of Olympus or the capricious God of Judea. We must seek the truths in the fringes of our perceived reality so that our technological offspring may be Christs, Muhammads, or a universal Brahman godhead.
The Problem Simplified
As we attempt to construct artificially intelligent beings, at best we will be hindered along the way if we do not give appropriate heed to the writings of metaphysicists like Martin Heidegger or critics like Hubert Dreyfus. At worst, we risk creating beings with far superior computational prowess and all of the worst aspects of human mental prowess.
Literature Review
Beavers, A. F. (2002). Phenomenology and Artificial Intelligence. Metaphilosophy, 33(1/2), 70–82. http://www.jstor.org/stable/24439316
An examination of Husserl’s reduction technique as an examination of what belongs to cognition and what belongs to the “natural world.” “A formalized phenomenology of cognition can therefore aid initiatives in cognitive computing.” It outlines the process of “world constitution” within phenomenology, wherein a consciousness builds its conception of the world. This is contrasted with the “micro-worlds” approach to AI, in which the goal is to build a “closed domain” small enough to be entirely mapped by a computer and filled with virtual objects, properties, and relationships that can be understood by an AI and abstracted onto the more complicated world at large. Beavers invokes the objections of Hubert Dreyfus and Luciano Floridi to this point. They theorized that any computer system “locked” within a “micro” version of the world would de facto be unable to surpass human intelligence, which takes in the world as it is in all its complexities. Beavers quotes Floridi: “what makes sophisticated forms of human intelligence particularly human is the equilibrium they show between creative responsiveness to the environment and reflective detachment from it (transcendency). This is why animals and computers cannot laugh, cry, recount stories, lie or deceive, as Wittgenstein reminds us, whereas human beings also have the ability to behave appropriately in the face of an open-ended range of contingencies and make the relevant adjustments in their interactions. A computer is always immanently trapped within a microworld.” However, Beavers later posits that this is not necessarily true, as through a Kantian lens we can view the scientific method as a human attempt to construct “microworlds” that explain broader phenomena.
Beavers emphasizes the most important aspect of phenomenology for these purposes lies in phenomenological reduction or what Husserl called “epoché” which is a conscious suspension of our ego (and all classifications along with it) in order to determine objects externally rather than internally. This is meant to be a method for attempting to discover that which is hidden from our perceptions regarding phenomena. In order to more accurately relay reality to artificial intelligence, we must explain the pathways of our own cognition.
—
Dreyfus, H. L. (1979). What Computers Can't Do.
A widely critiqued yet enduring classic on the topic of AI and philosophy. Dreyfus wrote in 1972 how simple symbolic representation could not be used to arrive at a general intelligence and was vindicated in his belief as this method has all but been abandoned today. For AI researchers today, philosophy is no longer taboo. Dreyfus argues that AI work is largely unsuccessful because simulating intelligent behavior on computers cannot in principle be carried out. He argues that capturing “intelligence” as humans understand it cannot be done by simply replicating what the conscious mind does. Much of human intellect is attributable to unconscious processes which cannot be captured with “formal rules.” Simple symbolic representation approaches have been replaced largely with “sub-symbolic” and statistics-based approaches to machine learning to account for variations in inputs. Dreyfus famously outlines four philosophical assumptions underlying the optimism of AI research in the late ’60s and ’70s. The biological assumption, or that “the brain processes information in discrete operations by way of some biological equivalent of on/off switches.” The psychological assumption, or that “the mind can be viewed as a device operating on bits of information according to formal rules.” The epistemological assumption, or that “all knowledge can be formalized.” Finally, the ontological assumption, or that “the world consists of independent facts that can be represented by independent symbols.” Dreyfus also goes on to elaborate that human problem solving is largely dependent on context. Context is intuitively stored unconsciously in our minds and influences us in ways we are not fully aware of. Dreyfus relates this to Heidegger’s “ready-to-hand” idea, or how a hammer disappears from our conscious mind when it is perfectly weighted.
Dreyfus’ ideas, while commonplace today and sacrilegious at the time, are vital to our current understanding of AI research and can be seen in modern “sub-symbolic” methods like neural networks and evolutionary algorithms within machine learning.
—
Pollock, J. L. (1997). Reasoning about Change and Persistence: A Solution to the Frame Problem. Noûs, 31(2), 143–169. http://www.jstor.org/stable/2216189
In this article, Pollock outlines “The Frame Problem” within AI research. The Frame Problem, as put by Pollock, arises out of four observations about beings in the world and a singular problem for programmers: the world changes. Because the world changes, a rational agent within the world must be able to perceive the world as it currently is; combine various perspectives of the world into a single coherent one through inference; detect changes in the world; and, lastly, acquire “causal information” in order to understand the changes its own actions will make. It is this final observation that makes creating (or being) a rational agent so challenging. Simply put, The Frame Problem, arising from planning theory, describes the seeming impossibility of accounting for radical variations within automated systems. The Frame Problem presents a challenge for human epistemology as well; however, we are able to solve it intuitively, hence the continued study of human unconscious systems. To illustrate this point, Pollock describes the act of navigating a dark room to find a light switch. A human would know that one could feel around the wall until the switch was within reach and that flipping said switch would activate the light. A human would probably think to describe this information to an AI as well. However, information such as the fact that the switch will remain stationary and that moving around the room will not cause the switch to move would need to be explained to a computer, whereas humans can intuit these relationships. Using many sophisticated epistemological classes and formulae, Pollock then lays out proofs for programming an AI in such a way as to perceptually acquire new information about its environment and reason about how its own actions will impact the world around it.
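A toy sketch (not Pollock’s formalism; the state representation and action names are illustrative assumptions) shows why the unstated persistence facts matter. In this naive state-transition style, every fact that survives an action survives only because the programmer explicitly carries it forward:

```python
# A toy illustration of the frame problem: after each action, the program
# must be told which facts persist. A human never needs this stated.
state = {"light_on": False, "switch_position": "wall", "agent_position": "door"}

def move_to(new_position, world):
    updated = dict(world)  # copying the dict is our crude "frame axiom":
                           # everything not mentioned below is assumed unchanged
    updated["agent_position"] = new_position
    # Unstated-for-humans facts now made explicit: moving around the room
    # does not relocate the switch and does not toggle the light.
    return updated

def flip_switch(world):
    updated = dict(world)
    updated["light_on"] = not world["light_on"]
    return updated

state = move_to("wall", state)
state = flip_switch(state)
print(state["light_on"])         # True
print(state["switch_position"])  # "wall": persisted only because we copied it
```

The copy-the-whole-world trick works in a three-fact toy domain; the frame problem is that in an open world the set of facts to carry forward (and the exceptions to their persistence) cannot be exhaustively enumerated.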
—
XU, Y. (2016). Does Wittgenstein Actually Undermine the Foundation of Artificial Intelligence? Frontiers of Philosophy in China, 11(1), 3–20. http://www.jstor.org/stable/44157795
It is believed that Wittgenstein’s philosophy would make him a critic of the Strong Artificial Intelligence thesis. One Stuart Shanker reconstructed Wittgenstein’s ideas of “language-games” in an attempt to show that it would indeed be impossible to achieve real machine intelligence using Wittgenstein’s ideas. This article thus attempts to reconstruct Wittgenstein’s arguments in order to be less “antagonistic” towards the building of an AI. Invoking the computational theory of the mind, Xu outlines the differences between Strong and Weak views on AI. A Strong AI view would assume the human mind to be a computer, so a sufficiently powerful computer would then be able to replicate (or surpass) human-level intellect. The Weak AI view would see the human mind as something much greater than a computer, and therefore machines could only, at best, simulate one. Xu claims many Wittgensteinian scholars would place him in the category of Weak AI, which Xu claims is also affirmed by Dreyfus, “who has claimed that the ontological assumption of traditional AI is nothing but logical atomism, a position that Wittgenstein held in his Tractatus but that he abandoned in his Philosophical Investigations.” Xu thinks that it is possible to salvage the ideas of Wittgenstein to be more favorable towards the building of AI. Xu says of Shanker’s Wittgenstein’s Remarks on the Foundation of AI that it depicts modern AI as infected with behaviorism and psychologism and thus the only path forward is “to restore the focus onto an agent’s social interactions, and away from that of a self-modifying computer program.” However, as this is the only path Shanker lays out, Xu is not convinced and returns to the source, Wittgenstein himself. The first statement of Shanker’s to be examined exists within the question “is it possible for a machine to think?” Xu equates this question to asking 100 years ago whether or not a machine could liquefy gas.
Many of the terms within the question demand more precise definitions or raise further questions. The second statement follows from the first: “thinking” must be defined as a precise process, not one abstracted from arbitrary outputs. Does “thinking” constitute writing and/or walking? Why or why not? If a machine could feel pain, could we consider it a “thinking” machine? Why or why not? Xu puts forward arguments both for and against Shanker’s reading of Wittgenstein’s philosophy. Xu goes on to describe the role of “psychologism” in modern AI, which he defines here as “the position that the normativity of logical rules can be systematically reduced to features of findings via psychological inquiries, whether they are conducted in an empirical or transcendental manner,” and argues that casting Wittgenstein as a “foe” of AI depends on aligning his ideas with psychologism.
—
Seigfried, H. (1976). Descriptive Phenomenology and Constructivism. Philosophy and Phenomenological Research, 37(2), 248–261. https://doi.org/10.2307/2107196
Seigfried’s first point is a refutation of the idea that phenomenology can be used as a diatribe against science and technology, on the grounds that the phenomena it describes are also constructions. Calling scientific observations either constructivistic or abstract would then be moot from the phenomenologist’s perspective. What phenomenology can be used for, however, is examining the justifications beneath scientific knowledge. He claims, as Heidegger does in Being and Time, that phenomenology describes phenomena as “that which shows itself necessarily but ‘unthematically’ in the ordinary phenomena.” It does not describe that which can be seen through empirical intuition, but rather describes the forms of intuition itself. He quotes Heidegger: “the meaning of phenomenological description lies in interpretation.” He paraphrases again from Being and Time: “the analysis starts out with a descriptive account of the phenomenal findings about questioning and understanding, research and explanation, then ‘points out’ the constitutional structures found, and finally ‘supplies,’ i.e., constructs, the ‘ground’ for the structures found, the ‘ground’ being temporality.” Thus phenomenology, in an extremely basic sense, can be seen as an investigation of reality similar to the scientific method, but one that accounts for structures, temporality, symbolic construction, and the observer themselves.
—
Desjarlais, R., & Throop, C. J. (2011). Phenomenological Approaches in Anthropology. Annual Review of Anthropology, 40, 87–102. http://www.jstor.org/stable/41287721
This article enumerates many case studies and successful research projects within the field of anthropology where phenomenological methods have led to insightful findings. This article would be cited as a means for adopting methodologies and for validating phenomenology as something more than an outdated precursor to the scientific method.
—
Bensemann, J., & Witbrock, M. (2021). The effects of implementing phenomenology in a deep neural network. Heliyon, 7(6). https://doi.org/10.1016/j.heliyon.2021.e07246
This study outlines the inclusion of aspects of phenomenology in deep neural networks in order to “determine whether knowledge of the input’s modality aids the networks’ learning.” They found that, in many cases, it did.
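A minimal sketch of the underlying idea may help: give the network explicit knowledge of each input’s sensory modality by appending a one-hot “modality tag” to the raw feature vector, so a control network sees the features alone while an experimental network also sees their modality. The modality names and function below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical set of input modalities; the names are assumptions for illustration.
MODALITIES = ["visual", "auditory", "tactile"]

def tag_with_modality(features, modality):
    """Concatenate a one-hot modality indicator onto a feature vector."""
    tag = [0.0] * len(MODALITIES)
    tag[MODALITIES.index(modality)] = 1.0
    return features + tag

# A control network would receive `x` alone; the experimental network
# receives the tagged vector and can learn modality-specific processing.
x = [0.2, 0.7, 0.1]
tagged = tag_with_modality(x, "auditory")
# tagged == [0.2, 0.7, 0.1, 0.0, 1.0, 0.0]
```

The comparison between the two conditions is then a matter of training otherwise-identical networks on the tagged and untagged inputs and measuring the difference in learning.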
—
Heidegger, M., & Dahlstrom, D. O. (2005). Introduction to Phenomenological Research. Indiana University Press. https://doi.org/10.2307/j.ctvt1sgpb
Heidegger himself on the process of phenomenological research.
—
Groenewald, T. (2004). A phenomenological research design illustrated. International Journal of Qualitative Methods, 3(1), 42–55. https://doi.org/10.1177/160940690400300104
An illustrated guide to phenomenological qualitative research methods.
Additional informing literature:
Winner, L. (2020). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.
An examination of the politics of technology.
—
Foucault, M. (2010). The Order of Things: An Archaeology of the Human Sciences. Routledge.
A critique of the failings of the scientific method.
—
Zahavi, D. (2019). Phenomenology: The basics. Routledge.
A phenomenal exploration of the utility of phenomenology. (No pun intended)
—
Wittgenstein, L., & Hacker, P. M. S. (2010). Philosophische Untersuchungen = Philosophical investigations. Wiley-Blackwell.
An exploration of the limits, functions, and failings of the semiotics of language.
—
Žižek, S. (2021). Hegel in a wired brain. Bloomsbury Academic.
An exploration of the utility of contradiction in human language and the implications of its removal from language by way of computer/brain interfaces.
Guiding Research Questions:
Is it possible to achieve a more “objective” view of reality?
What unique characteristics exist within human perspective?
How can these characteristics be translated into code?
How would an AI construct reality?
Is it possible, through brain-computer connections, to conceive of language and reality free of ambiguity and contradiction?
If so, what would be the implications of losing said ambiguities and contradictions?
How can “the frame problem” of AI be solved, and how can large variations in coding frames be accounted for?
Can “microworlds” be used to simulate the construction of webs of meaning present in human language?
Can phenomenological methods of study give us more nuanced ways of defining “classes” that more closely resemble our lived reality?
Methodologies:
First, it would be crucial to settle on a particular interpretation of phenomenological investigation. Whichever interpretation is chosen, a uniform methodology would be essential.
Secondly, experiments in machine learning could be devised to test the applicability of these modes of thinking. For instance, two simulations could be built: a control, coded by traditional external means, in which virtual objects are defined by their intrinsic parts; and an experimental simulation, coded through a phenomenological frame, in which virtual objects are defined by their relations to other objects and to the observer themselves. This would rely heavily on the methodologies and parameters for neural network comparisons put forward by the experiments of Joshua Bensemann and Michael Witbrock.
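The contrast between the two proposed simulations can be sketched in code. The class names and relation scheme below are assumptions for illustration only; they are not drawn from any of the cited works.

```python
# Control condition: objects defined "inwardly," by intrinsic attributes alone.
class IntrinsicObject:
    def __init__(self, name, color, mass):
        self.name, self.color, self.mass = name, color, mass

# Experimental condition: objects defined by their relations to other
# objects and to the observer, in a phenomenological frame.
class RelationalObject:
    def __init__(self, name):
        self.name = name
        self.relations = {}  # e.g. {"rests_on": table, "seen_by": observer}

    def relate(self, relation, other):
        self.relations[relation] = other

# The same candle, construed two ways:
candle_a = IntrinsicObject("candle", color="white", mass=0.1)

observer = RelationalObject("observer")
table = RelationalObject("table")
candle_b = RelationalObject("candle")
candle_b.relate("rests_on", table)
candle_b.relate("appears_to", observer)
```

In the experimental condition, the candle has no identity apart from its web of relations; the question the experiment would ask is whether a network trained on the relational representation classifies the world differently from one trained on intrinsic attributes.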
Thirdly, a curriculum could be ideated that combines traditional computer science with cognitive science informed by phenomenological teachings. Researchers would then undertake the curriculum to see if the quality of their code differed from that of traditional computer science graduates. Alternatively, interviews could be conducted or a questionnaire constructed in the same style as the one used by Thomas Groenewald, in order to determine the phenomenological affinity of programmers. These selected programmers could then make up a test group while non-phenomenologically affinitive programmers could make up a control.
Fourth, philosophers of technology should be consulted to advise thinking on future research and to assess the validity of this proposal itself.
Lastly, and more generally, a thought experiment should be explored: one that imagines Descartes, with his famous contemplation of his candle’s wax, seated at a computer terminal and attempting to classify the wax of a virtual candle. By applying phenomenological filters such as Imaginative Variation, virtual models and simulations of reality could be constructed for neural networks that seek to identify phenomena as much by what they are not as by what they are.
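A toy rendering of Imaginative Variation makes the thought experiment concrete: vary away each property of a virtual object and record which variations destroy its identity. Properties that cannot be varied away are treated as essential; the rest are incidental. The property names, the predicate, and the wax example below are assumptions for illustration.

```python
def essential_properties(obj, still_counts):
    """Return the properties whose removal makes the object fail `still_counts`.

    `obj` is a dict of property names to values; `still_counts` is a
    predicate judging whether a varied object is still the same kind of thing.
    """
    essential = []
    for prop in obj:
        varied = {k: v for k, v in obj.items() if k != prop}  # imagine it away
        if not still_counts(varied):
            essential.append(prop)
    return essential

# Descartes's wax: it melts, losing scent and solidity, yet remains wax.
wax = {"solid": True, "scented": True, "malleable": True}
still_wax = lambda o: "malleable" in o  # assumed identity criterion
essential_properties(wax, still_wax)  # -> ["malleable"]
```

A classifier built this way would identify the candle’s wax as much by the properties it can lose as by the properties it must keep, which is the negative-definition strategy the thought experiment proposes.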