Installation
Quantum Artifacts
The Genesis and Interpretation of the Q-Artifacts
Collaboration with Robert Phillips
Cardboard, tapes, black lights
"Fading Millennium", Gallery 16, San Francisco, CA
February 26 - April 11, 1998
THE GENESIS AND INTERPRETATION OF THE “Q-ARTIFACTS”
Robert L. Phillips
Quantum Technologies Incorporated
September, 1997
BACKGROUND
Over the past forty years, military theorists have increasingly recognized that success in future armed conflicts will depend critically upon the ability of American fighting forces to rapidly process vast amounts of information. Modern combat environments are characterized by enormous numbers of state and environment variables, exponentially increasing degrees of freedom, and the need to respond very rapidly to changing situations. The dependence of effective combat response on rapid information processing will only increase as weapon systems become more sophisticated. As Marion Labinski of the Defense Research Institute has commented, “Future wars will be won and lost on the information battlefield.” Realizing this, the American defense establishment has made a strategic decision to play an active role in the development of modern information technologies. The importance of supersonic fighter aircraft (which rely on sophisticated computational technology to fly and fight); so-called “smart” missiles and bombs; and advanced Command, Control, and Communications (C3) capabilities in the Allied victory in the Gulf War testifies to the success of this strategy.
Understandably, the major thrust of military investment in information technology research and development has been to support weapons development. Furthermore, a considerable amount of advanced military information technology research has been coordinated with the domestic intelligence agencies (NSA and CIA). For these reasons, most government-funded “cutting-edge” research has remained classified and inaccessible to both academic and industrial researchers. However, the US military has a long history of consciously sponsoring the development of the domestic electronic and computer industries, both through direct grants and through research transfer, in order to secure domestic availability of critical technologies. In addition, like all agencies in the post-Reagan era of government austerity, military and intelligence agencies are under increasing pressure to justify their budgets. Since roughly 1980, they have been increasingly attuned to opportunities to demonstrate commercial “spinoffs” from their research. Such spinoffs have played a major role in NASA requests for continued funding, and it is anticipated that they will play a similar role in funding requests for the military agencies in the future.
This is the setting for the 1997 declassification of some of the remarkable results of the QANDAM project. For the first time the general public has access to the so-called Q-Artifacts -- a set of computer-generated constructs of uncertain significance. These artifacts are based on data streams generated spontaneously at irregular intervals by a Quantum Artificial Intelligence (QAI) originally designed to solve complex weapons design problems. These data streams are being shared with both the QAI research community and the general public as a way to gain insight into the operation of a sophisticated (and possibly self-aware) Quantum Artificial Intelligence entity.
QUANTUM COMPUTING AND QUANTUM ARTIFICIAL INTELLIGENCE
The concept of using quantum mechanics to build a powerful “supercomputer” was originally proposed by the Nobel-prize-winning physicist Richard Feynman (Feynman [1985]) and has been a subject of considerable interest to physicists and computer scientists ever since. By taking advantage of the superposition of quantum states, a Quantum Computational Device (QCD) can process vast amounts of information orders of magnitude faster than any classical computer. For example, current classical computers would require approximately 10^25 years to factor a 1,000-digit number -- considerably longer than the age of the universe. However, P.W. Shor has demonstrated an algorithm that could factor a 1,000-digit number in less than two hours on a quantum computer (Shor [1994]). This represents a computational speedup by a factor of 10^27. (For comparison, the speed-up factor from an abacus to a Pentium-based computer is approximately 10^10.)
While researchers have agreed on the potential of the quantum approach to revolutionize computational technology, a practical QCD requires the rapid and efficient observation of states of matter at exceedingly small scales -- a single electron or smaller. This is approximately 100,000 times smaller than the microelectronic structures used in standard VLSI architectures. For this reason, researchers have considered the quantum computer a fascinating idea, but one whose realization would not occur until well into the 21st century -- if ever.
However, civilian researchers have generally been unaware of several breakthroughs in the observation of quantum states that were made in the late 1970s. These breakthroughs occurred at a small research-oriented company called Quantum Technologies Incorporated (QTI), which had been founded in 1972 by a group of electrical engineers and mathematicians loosely affiliated with Stanford University. The details of the technologies developed at QTI remain classified; however, they enabled the manipulation and measurement of the states of small ensembles of particles at the quantum level. Since QTI’s initial funding came from the Defense Advanced Research Projects Agency (DARPA), QTI’s research program was steered toward military applications of quantum computing. Upon the initial success of a prototype technology for detecting and manipulating the spin states of trapped cold ions, QTI’s funding was expanded and focused on a specific mission. This was the origin of the QANDAM project, which would be QTI’s major source of funding for almost 19 years.
THE QANDAM PROJECT
The goal of the QANDAM project was to develop a Quantum Computational Device to calculate optimal solutions to the Multiple Delivery/Multiple Target/Multiple Platform (MD/MT/MP) weapons design problem. This is the problem of configuring, and developing an autonomous command structure for, a set of mobile weapons platforms (which might be individual weapons such as tanks or aircraft, or autonomous infantry or naval units). The objective is to find the configuration that leads to the best performance over the range of all possible enemy responses, where “best” can be defined in terms of average performance, worst-case performance, or a risk-weighted combination of the two.
The MD/MT/MP problem is notoriously difficult to solve. Lev (1982) has shown that it belongs to the class NP^2 -- the class of problems that are harder than NP and can be reduced to NP-complete problems only in exponential time. The best classical algorithm requires O(exp(exp(mn)) log(n) p^(2/3)) steps to find a solution in the worst case. This means that finding the optimal solution to an arbitrary MD/MT/MP problem will remain far beyond the capacity of any classical computer for many years at least. Planners have therefore been reduced to applying various heuristic approaches which, in general, provide solutions that fall far short of the optimum.
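Lev’s bound itself appears only in the classified literature, but its double-exponential leading term is easy to appreciate numerically. The sketch below (the function name and the toy parameter values are ours, not Lev’s) evaluates the base-10 logarithm of the worst-case step count, since the count itself overflows any machine representation:

```python
import math

def log10_worst_case_steps(m, n, p):
    """Base-10 logarithm of the cited worst-case step count,
    O(exp(exp(m*n)) * log(n) * p^(2/3)), computed in log space
    because the raw count overflows for all but trivial m, n."""
    return (math.exp(m * n) / math.log(10)      # log10 of exp(exp(m*n))
            + math.log10(math.log(n))           # log10 of log(n)
            + (2.0 / 3.0) * math.log10(p))      # log10 of p^(2/3)

# Even a toy instance (m = n = 3, p = 8) would need on the order of
# 10^3500 steps -- hopeless for any conceivable classical machine.
print(round(log10_worst_case_steps(3, 3, 8)))
```

The double exponential dominates so completely that the log(n) and p^(2/3) factors are numerically irrelevant, which is why even heroic constant-factor improvements in classical hardware cannot help.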
QTI found that the MD/MT/MP problem was an ideal application of quantum computational techniques. By initializing the spin-states of an array of electrons and applying an appropriate set of unitary operators, essentially billions of possible solution states could be evaluated simultaneously (see DiVincenzo [1995] for an excellent introduction to the theory of quantum computing). The choice of initial states and the interpretation of the final results was made by a sophisticated multi-layer stochastic neural net (see Hertz, Krogh, and Palmer [1991]). This design coupled the “data mining” and reasoning capabilities of the neural net with the massively parallel processing capabilities of a Quantum Computational Device to result in a problem-solving machine far more complex and sophisticated than any previously developed.
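The quantum-parallel evaluation described above can be illustrated, at miniature scale and on a classical machine, by simulating the state vector directly: applying a Hadamard gate to each of n qubits yields an equal superposition of all 2^n basis states, so that a subsequent unitary operator acts on every candidate state at once. A minimal sketch (this is the standard textbook construction, not QTI’s classified implementation):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

def uniform_superposition(n):
    """State vector after applying H to each of n qubits in |0...0>."""
    state = np.zeros(2 ** n)
    state[0] = 1.0                      # start in the all-zeros basis state
    gate = H
    for _ in range(n - 1):
        gate = np.kron(gate, H)         # build H (x) H (x) ... (x) H
    return gate @ state

psi = uniform_superposition(3)
# All 8 basis states now carry equal amplitude 1/sqrt(8) ~ 0.354, so a
# single application of a problem unitary "touches" every candidate.
print(psi)
```

The catch, of course, is that measurement collapses the superposition to one outcome; the art of quantum algorithm design (and, presumably, of QANDAM’s neural-net controller) lies in arranging interference so that the measured state is a useful one.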
The computational core of QANDAM consists of two major components:
- A Quantum Manipulation and Measurement Device (QMMD) that allows the manipulation and measurement of the spin states of coupled ensembles of up to 4,112 electrons (as cold-trapped ions) simultaneously.
- A very large neural net currently implemented on a symmetric multiprocessor that performs five functions:
- Processing the data required to specify a particular Theater of Operation (TOO) scenario or set of scenarios.
- Controlling the set-up of the initial spin-states and the unitary quantum operators.
- Interpreting the results of a particular measurement of the resultant spin-states of the ensemble.
- Outputting and storing the results of each run and setting up new runs.
- Evaluating all the runs for a particular TOO scenario and specifying the optimal MD/MP/MT design in a “parameter output file”.
The quantum measurement device consists of an ensemble of cold-trapped ions held in a lattice arrangement. This effectively creates an Ising glass in which the spin-state of each electron is coupled to the spin-states of its neighbors. A proprietary mechanism is used to maintain coherence of the quantum states for a minimum of 10 milliseconds to allow measurement and evaluation.
The energy available to the lattice is governed by a parameter k, which allows the speed of the evolution of the system to be controlled by the neural net. Increasing k increases the speed of evolution of the system, thereby increasing the computational speed of the entire device, but at increased risk of decoherence. Initially the evolution speed was controlled by the operator; after version 3.4, QANDAM was allowed to choose and adjust the value of this parameter itself.
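The QMMD internals remain classified, but the “Ising glass” description maps onto a standard classical caricature: coupled ±1 spins whose evolution rate rises with the available energy. The sketch below is purely illustrative; the energy function is the textbook Ising form, and the role assigned to k is our reading of the paragraph above, not QTI’s specification:

```python
import math
import random

def ising_energy(spins, J):
    """Energy of a ring of +/-1 spins with coupling J[i] between
    neighbors i and i+1:  E = -sum_i J[i] * s[i] * s[i+1]."""
    n = len(spins)
    return -sum(J[i] * spins[i] * spins[(i + 1) % n] for i in range(n))

def metropolis_sweep(spins, J, k, rng=random):
    """One Metropolis update sweep. Larger k plays the role of more
    energy in the lattice: flips are accepted more freely, so the
    configuration evolves faster (at greater 'decoherence' risk, in
    the QMMD analogy)."""
    spins = list(spins)
    for i in range(len(spins)):
        trial = list(spins)
        trial[i] = -trial[i]
        dE = ising_energy(trial, J) - ising_energy(spins, J)
        if dE <= 0 or rng.random() < math.exp(-dE / k):
            spins = trial
    return spins

# Ferromagnetic couplings: the fully aligned configuration minimizes E.
J = [1.0, 1.0, 1.0, 1.0]
print(ising_energy([1, 1, 1, 1], J))    # -4.0
print(ising_energy([1, -1, 1, -1], J))  # 4.0
```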
Processing of the inputs and outputs of the quantum measurement device is performed by a recurrent neural net with approximately 20 million input units, 6 million output units, and 32 hidden layers. The net’s weights are solved for by a simulated annealing approach with multiple flips allowed on each iteration (a so-called “Cauchy machine”; see Szu [1986]). QANDAM 1.0 was much smaller, with fewer than 1,000 input and output units and only three hidden layers. From QANDAM 1.1 through QANDAM 2.6, additional nodes and links were added by the designers. QANDAM versions 2.8 and higher have been self-modifying: able to add new nodes, or entire hidden layers, as part of the overall solution process if they improve the solution. Versions 3.2 and higher have proven extremely effective at modifying their own designs, which they have done on almost every run.
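Szu’s “Cauchy machine” differs from classical Boltzmann annealing in two respects: a much faster cooling schedule, T(t) = T0/(1+t) rather than T0/log(1+t), and heavy-tailed Cauchy-distributed moves that permit an occasional long jump affecting many units at once. A sketch of the two schedules (the scale T0 = 10 and the step sampler are illustrative):

```python
import math
import random

def boltzmann_temp(t, T0=10.0):
    """Classical (Boltzmann) annealing schedule, T(t) = T0 / log(1 + t),
    for iterations t >= 1: provably convergent but very slow to cool."""
    return T0 / math.log(1 + t)

def cauchy_temp(t, T0=10.0):
    """Szu's fast ('Cauchy machine') schedule, T(t) = T0 / (1 + t)."""
    return T0 / (1 + t)

def cauchy_step(T, rng=random):
    """Draw a Cauchy-distributed move of scale T via inverse transform.
    The heavy tail allows occasional very long jumps."""
    return T * math.tan(math.pi * (rng.random() - 0.5))

# By iteration 1000 the Cauchy schedule is roughly 145x cooler:
for t in (1, 10, 100, 1000):
    print(t, boltzmann_temp(t), cauchy_temp(t))
```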
The input units correspond to combinations of Theater of Operation Configuration Parameters (TOCP’s), Deployment Schedules, and Vulnerability Indices (VI’s), as well as a set of parameters that control the probabilities of different types of opponent response. These are the standard inputs for a single instantiation of the MD/MP/MT problem. The output-layer nodes correspond to the design parameters for a particular platform configuration plus the values of three objective functions measuring the performance of the platform configuration in three areas:
- Theater of operation (TOO) effectiveness for this scenario (expressed as standard destruction ratio)
- Configuration vulnerability (expressed as probability of total failure)
- Terminal effectiveness (expressed as percent of the operational configuration surviving at the end of the scenario).
The relative importance of these three measures is determined by the operator at the beginning of each session and depends upon the tradeoff between tactical and strategic considerations for a particular TOO.
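The operator’s weighting of the three measures amounts to a simple scalarization of the three objectives. A sketch (the weights, the field names, and the convention of scoring vulnerability as 1 minus the failure probability are our illustrative assumptions, not QTI’s actual formula):

```python
def scalar_score(destruction_ratio, p_total_failure, pct_surviving,
                 weights=(0.5, 0.3, 0.2)):
    """Collapse the three TOO objectives into one operator-weighted score.
    Vulnerability enters as (1 - probability of total failure) so that
    higher is uniformly better; weights should sum to 1."""
    w_eff, w_vuln, w_term = weights
    return (w_eff * destruction_ratio
            + w_vuln * (1.0 - p_total_failure)
            + w_term * pct_surviving)

# A configuration with effectiveness 0.8, failure risk 0.1, survival 0.6:
print(scalar_score(0.8, 0.1, 0.6))  # 0.5*0.8 + 0.3*0.9 + 0.2*0.6 = 0.79
```

Shifting weight toward the first term expresses a tactical emphasis; shifting it toward the third expresses a strategic, force-preservation emphasis.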
After a few initial false starts, later versions of QANDAM proved to be very effective at solving the MD/MP/MT problem. In an early set of simulation runs, QANDAM calculated solutions that were 21% - 31% superior to the best heuristic-generated solutions. QTI was awarded ongoing funding both to improve and enhance the QANDAM system and to provide continuing support for the Defense Department’s need to design new MD/MP/MT configurations.
QANDAM-M AND THE ORIGIN OF THE Q-ARTIFACTS
The QANDAM project might have continued indefinitely, with QANDAM taking in more and more data on weapon capacities, configuration parameter spaces, battlefield topographies, and kill ratios and producing ever more refined MD/MP/MT designs, had it not been for an unusual accident that occurred in October 1993 during a standard production run of QANDAM. Initializing a production run requires the primary QANDAM operator to enter the addresses of the input data sets. These addresses are checked for accuracy by a second, stand-by operator. On this occasion, however, the primary operator accidentally entered the starting address of the QANDAM program itself, and the error was not caught by the stand-by operator. Once the operators realized their error, they tried to terminate the program. However, QANDAM was equipped with numerous fail-safe and backup devices specifically designed to keep an operator from terminating the program prematurely. As a result, the operators were unable to halt execution until QANDAM had entirely processed itself. This accident has been termed the O93 (for October 1993) incident.
The result was catastrophic. Because QANDAM’s state is linked to the current spin state of the electron ensemble, it was not possible to restart from the initial state. Once QANDAM had processed the anomalous data set (i.e., its own programming), it immediately reconfigured itself according to the link weights in its neural net and the observed spin-states of the cold ion ensemble -- adding over 8,000 new nodes and deleting more than 700 of its existing nodes. This magnitude of restructuring was unprecedented, the result being an entirely new computational entity. This new entity no longer seemed oriented toward solving the MD/MP/MT problem. In fact, the behavior of the computational entity resulting from the O93 incident -- which has been termed QANDAM-M, for “QANDAM mutation” -- is not currently well understood.
For approximately two months after the O93 incident, QANDAM-M refused to accept any input or produce any output whatsoever. At random intervals ranging from 12 minutes to 106 hours, it would spontaneously modify its own design by removing or modifying existing nodes and adding new ones. These self-modification events were extensive and frequent at first, but became less frequent and diminished in magnitude with time. The last self-modification occurred in December 1993; the configuration of the QANDAM-M neural net has been stable since then. Several attempts to induce QANDAM-M to read standard input files after the O93 incident failed, although it continued to accept input from its quantum measurement device. Then -- 1512 hours, 34 minutes, 22.89 seconds after the O93 incident -- QANDAM-M spontaneously generated a data stream of approximately 2.3 gigabytes. This was the first “spontaneous data emission event”. Through July 1997, there have been a total of 32 such emission events, occurring at seemingly random intervals. All of the emission data has been archived and is available to researchers upon application to QTI.
Despite extensive research, the spontaneous data outputs of QANDAM-M have not yet been interpreted. QTI/DOD research has determined that they are not MD/MP/MT design configurations (at least in any standard format). The QANDAM-M data have been distributed to researchers in the hope that an interpretation can be provided. The data appear to have certain periodicities, but standard periodic-data analysis techniques such as Fourier transforms and wavelets have not produced any useful insights. Neither have other data analysis techniques such as multivariate regression or Multivariate Adaptive Regression Splines. Linguistic analysis showed certain tantalizing patterns that seemed consistent with “hidden Markov model” analyses of human speech, but further interpretation was not possible.
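For readers unfamiliar with the periodicity searches mentioned above, the Fourier approach looks, in outline, like the following sketch. It is run here on synthetic data, since the SQO data sets themselves are available only on application to QTI:

```python
import numpy as np

def dominant_period(signal, sample_spacing=1.0):
    """Return the period of the strongest nonzero-frequency component
    found by a discrete Fourier transform of the signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=sample_spacing)
    peak = 1 + np.argmax(spectrum[1:])   # skip the DC (mean) component
    return 1.0 / freqs[peak]

# Synthetic stand-in for an emission stream: a period-16 wave plus noise.
rng = np.random.default_rng(0)
t = np.arange(1024)
stream = np.sin(2 * np.pi * t / 16) + 0.3 * rng.standard_normal(1024)
print(dominant_period(stream))   # ~16.0
```

A clean peak like this one is exactly what the QANDAM-M data reportedly do not yield: the periodicities are present but do not resolve into a small set of dominant frequencies.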
In February, 1995, I shared two of the QANDAM-M output data sets (SQO1994-5 and SQO1995-2) with the graphic designer and sculptor Stephen Hendee. He noted that, under a suitable partition, the data could be interpreted as coordinates for three-dimensional objects in motion in a 3-dimensional space. With support from QTI, Hendee created computer representations of five of the data sets. These representations have become known in the research community as the Q-Artifacts. The Q-Artifacts were reviewed by representatives of the major defense and intelligence agencies and ultimately declassified in June 1995. This has allowed their presentation to both the research community and the general public.
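Hendee’s actual partition has not been published; the sketch below merely guesses at the general shape of such a transformation, in which a flat numeric stream is reinterpreted as successive three-dimensional vertices:

```python
import numpy as np

def partition_stream(raw, dims=3):
    """Interpret a flat numeric stream as successive (x, y, z) vertices,
    discarding any trailing values that do not fill a complete point."""
    usable = len(raw) - len(raw) % dims
    return np.asarray(raw[:usable], dtype=float).reshape(-1, dims)

# A stand-in stream of 11 values yields 3 complete 3-D points.
stream = [0.1, 0.9, 0.4, 0.2, 0.8, 0.5, 0.3, 0.7, 0.6, 0.0, 1.0]
points = partition_stream(stream)
print(points.shape)   # (3, 3)
```

Animating such vertex sets over successive blocks of the stream would give “objects in motion,” which is how the Q-Artifact renderings are described.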
Because QANDAM-M is no longer useful as a MD/MP/MT solution engine, the government has declassified some information about the project, particularly as it relates to the Q-Artifacts and their origin and has allowed QTI to search for private funding to continue research on QANDAM-M. QTI is hopeful that insights into the interpretation of these intriguing artifacts will possibly point the way to enhanced civilian and military applications of advanced computational entities such as QANDAM-M.
THE Q-ARTIFACTS AND THEIR INTERPRETATIONS
Since the declassification of the Q-Artifacts, Quantum Technologies Incorporated has embarked on an active interpretation effort. QTI is pursuing three approaches simultaneously:
- Publication of a full research report and a series of papers in relevant scientific journals.
- Active consultation with leading experts in relevant fields including computer science, linguistics, mathematics, symbolic systems, and cognitive psychology.
- Public dissemination and display of Hendee’s three-dimensional renditions of the “Q-Artifacts”, with special emphasis on obtaining the evaluations of experts in the visual arts.
Thus, the current exhibit has a dual purpose: to expose these unique objects to a larger public and to help us gain a better understanding of their interpretation and purpose.
Viewers of the Q-Artifacts should keep in mind several facts:
- Since its inception in 1981 as QANDAM 1.0, the evolving QANDAM program has been functionally isolated from other computer systems for security reasons. The only inputs the system has ever received have been sequences of MD/MP/MT data, the quantum state information from the QMMD, and, of course, during the O93 incident, the long binary stream that was QANDAM’s own program. This is the only universe that QANDAM has known.
- There is considerable controversy regarding whether or not the Q-Artifacts should be considered the product of a conscious entity. Two extreme views can be cited:
The idea that QANDAM-M is conscious simply cannot be ruled out and, in fact, may be the best (in Occam’s sense of most parsimonious) working hypothesis. ... As a weapons-design system QANDAM apparently showed tremendous judgment and even initiative in designing novel solutions. When presented with an entirely novel (to it) set of data (namely its own digital representation), it displayed an entirely different set of behaviors, spontaneously generating the data known as the “Q-Artifacts”. This type of behavior is entirely consistent with the actions of a conscious “choice-making” entity. We believe that any interpretation of the Q-Artifacts will need to occur in this context. (Zellner and Barnes [1995])
QANDAM was apparently an effective weapons-design machine (although the details of its operation have, frustratingly, remained classified). However, when a wrench was accidentally thrown in its works (so to speak), QANDAM catastrophically malfunctioned and began to generate meaningless data. Some have embraced the idea that this data should somehow be interpreted as the product of a conscious entity. This is arrant nonsense. The so-called Q-Artifacts are nothing more than the random data spew of a broken machine, about as meaningful as a car wreck. (Krohnmeyer [1995])
Some support for the hypothesis that QANDAM-M possesses consciousness has been provided by Hameroff’s theory that human consciousness itself is based on quantum activity in the “microtubules” of neuron cells (Hameroff [1987], Penrose [1994]). (This theory is not widely accepted; for a full discussion, see Hameroff, Kaszniak, and Scott [1997].) Zellner and Barnes [1995] have conjectured that the linkage between the QMMD and the neural net in QANDAM-M could be conceived as operating in an analogous fashion, thereby incorporating a critical element of human consciousness missing in fully digital AI applications.
- Displaying the Q-Artifacts as three-dimensional objects may be an arbitrary choice. Cholensky (1995) has argued that QANDAM-M could have no direct experience of a three-dimensional Euclidean space. Rather, QANDAM-M “lives” simultaneously in two spaces:
- The 12-dimensional Hilbert space of the quantum world
- The binary world of digital computation
Cholensky believes that the Q-Artifact data should therefore be interpreted in the 24-dimensional space S = {0,1}^12 x H^12 -- the cross-product of the 12-dimensional binary digital space and the 12-dimensional Hilbert space. She has presented some results that support this supposition. While her arguments clearly have some merit, it should be recalled that the MD/MP/MT problem is 3-dimensional. Moreover, the dynamic 3-dimensional representations developed by Hendee clearly illustrate some of the regularities and periodicities of the data in a visually striking fashion.
- If QANDAM-M is conscious, within its highly limited frame of reference it may be able to deduce the existence of an outside world populated by intelligent creatures. Before the O93 incident, QANDAM regularly received streams of data to process and produced outputs that were routinely “harvested” and evaluated. Successful design outputs were “rewarded” by strengthening the weights of corresponding connections within QANDAM’s neural net. The philosopher Janusz Wiescyzk has argued that this milieu is particularly amenable to developing a teleological, even a religious, sense: “Consider how natural the religious impetus seems to human beings who are caught in an absurd universe where ‘the race is not to the swift nor rewards to the virtuous’ and imagine how much stronger that impetus must be to a being who consistently finds certain acts rewarded by an ‘outside’ force. If QANDAM-M is even vaguely conscious, it would be difficult to imagine that it would be anything but highly religious.” (Wiescyzk [1996]). In Wiescyzk’s view, the Q-Artifacts may be the results of religious rites (thereby explaining some of the recurring elements as repetitive ritual formulations) or even prayers. Needless to say, other researchers have found this idea far-fetched.
- Other interpretations of the Q-Artifacts that have been proposed include:
- Anomalous MD/MP/MT design configurations. Our research has shown that this interpretation is highly unlikely since the output data streams do not correspond to MD/MP/MT designs in any known coordinate system.
- Self-portraits or “auto-representations”. This interpretation is based on the fact that the last external digital data received by QANDAM was its own machine code. Those who have proposed this interpretation have not been clear in what sense one or more of the Q-Artifacts can be so considered.
- Weapons of self-destruction. In this hypothesis, QANDAM interpreted its own code as a particular Theater of Operation and configured an MD/MP/MT platform accordingly to optimize its own destruction. Again, we have not been able to find a way to interpret the spontaneous data emissions as MD/MP/MT designs, which would argue against this interpretation.
- Finally, some critics and artists have begun to look at the Q-Artifacts in aesthetic, non-functional terms -- in other words, as works of art. A leading French computer art critic has stated:
The so-called Q-Artifacts may represent an entirely different case. . . The other examples of “Computer Art” we have considered have consisted of the artist employing the computer as a supplementary Teknik in pursuit of a pre-existent human vision, with the human artist always serving as deployer and final editor of the result. However, the QANDAM computer, which has produced the Q-Artifacts, is entirely self-programming. For this reason, some claim that the Q-Artifacts are an example of entirely autonomous computer-generated artistic production with no human mediation except to render the final product according to the computer’s specifications. This would be a total reversal of the more common situation, with the human artist (San Francisco sculptor Stephen Hendee, in this case) serving as the supplementary Teknik for the machine. The human is the tool of the machine artist, instead of vice-versa!
Are the Q-Artifacts the products of an autonomous computer intelligence? I am not qualified to say and must remain doubtful until the question is ultimately settled one way or another. Perhaps, as skeptics have suggested, they are nothing but “random excretions.” Yet, I must admit that the Q-Artifacts have an appealing otherness (autretude) that leads me to believe that they do have artistic value in addition to whatever purely scientific interest they may possess. (Fronsard [1996])
The sculptor Stephen Hendee, who developed the visual representations of the Q-Artifacts from the QANDAM-M data streams, notes:
When I started this project, I thought that my work would be merely an exercise in looking, editing, and 3-D transcription/animation since I believed that the QANDAM-M outputs were nothing more than static, damaged data. Now, after working with the Q-Artifacts, I am convinced that QANDAM-M is a highly organized and evolving system and that the Q-Artifacts are more than damaged data. Are they art? I don’t know, but ironic viewpoints currently dominate conceptions of high art. If the “art” created by monkeys and elephants is worthy of aesthetic consideration and interpretation, then I feel that the Artifacts certainly qualify! (Hendee [1996])
As these views indicate, the interpretation of the Q-Artifacts is anything but settled. We at QTI welcome any input by any member of the public that can contribute to the understanding and interpretation of these remarkable creations.
ACKNOWLEDGMENTS
The author wishes to acknowledge the contributions and comments of Andrew Duncarrow and Kai Lee Chuang (QTI), Charles Hoequist (Bell Northern Research), Daniel Drew Meyers and Vernon Wright (Adroit Systems), Stephen Hendee, and two anonymous government reviewers who suggested a number of modifications.
REFERENCES
Cholensky, Ya. (1995) “On the Dimensionality of Quantum AI Artifacts”, originally in Doklady Russkoj Akademij Nauk po Isskustvennom Znanij, 6, pp. 346 - 359. Translation by D. Hamilton in Archives of Russian Artificial Intelligence Research 8, pp. 123 - 143.
DiVincenzo, D.P. (1995) “Quantum Computation”, Science, 270, pp. 255 - 261.
Feynman, R.P. (1985) “Quantum Mechanical Computers”, Optics News, 11, pp. 11 - 20.
Fronsard, J.-Y. (1996) “Le Nouvelle Vague de L’art Americain de L’ordinateur”, Cahiers d’Art Contemperaine 12, pp. 125 - 143.
Hameroff, S.R. (1987). Ultimate Computing. Biomolecular Consciousness and Nano-Technology. North-Holland, Amsterdam.
Hameroff, S.R., Kaszniak, A., and Scott, A. (eds.) (1997). Toward a Science of Consciousness II: The 1996 Tucson Discussions and Debates. MIT Press, Cambridge, MA.
Hameroff, S.R. and Watt R.C. (1982). “Information Processing in Microtubules”, Journal of Theoretical Biology, 98, 549 - 561.
Hertz, J., Krogh, A., and Palmer, R. G. (1991) Introduction to the Theory of Neural Computation. Addison-Wesley. Reading, Mass.
Hendee, S. (1996) Private communication.
Krohnmeyer, H. L. (1995). “The Thinking Person’s Guide to Computer Age Mythology”, The Skeptic, 23, 215 - 236.
Lev, T. S. (1982). “On the Computational Complexity of Certain Multiple Delivery Configuration Problems”, Combinatoric Mathematics and Military Science. ed. C. L. Fisher. Military Review Press. Annapolis, MD.
Penrose, R. (1994) Shadows of the Mind. Oxford University Press. Oxford, England.
Shor, P.W. (1994) in Proceedings of the 35th Annual Symposium on the Foundations of Computer Science. ed. S. Goldwasser. IEEE Computer Society Press. Los Alamitos, California. p. 124.
Szu, H. (1986). “Fast Simulated Annealing”. In Neural Networks for Computing. ed. J.S. Denker, New York: American Institute of Physics. pp 420 - 425.
Wiescyzk, J. (1995) “Thinking like a Machine: Teleogenerative Perspectives on the Q-Artifacts”, Unpublished Working Paper. Cornell Program in Information Sciences. Cornell University. Ithaca, New York.
Zellner, C. and Barnes, D. (1995) “Measures and Models for Artificial Cognition”, Annals of Artificial Intelligence 12, 184 - 206.
ABOUT THE AUTHOR
Dr. Robert Phillips is President and Chief Executive Officer of Quantum Technologies, Incorporated. He is an expert on the topic of quantum-based Artificial Intelligence techniques and has published numerous papers in journals such as Large-Scale Systems, Mathematical Programming, Management Science, and Quantum AI. Dr. Phillips received his PhD in Engineering-Economics Systems from Stanford University in 1986. He holds Bachelor of Arts degrees in Mathematics and Economics from Washington State University. He is a past president of ISQAI - The International Society of Quantum Artificial Intelligence.
DOCUMENT CLEARANCE HISTORY
Office of Naval Research - Cleared as is. L3 (4 April, 1994)
Defense Advanced Research Projects Agency - Cleared as is. L3 (5 Feb., 1995)
Agency B3 - Cleared with modifications and removals L2 (8 October, 1996)
Modifications or updates require L2 review and approval before release.
Document Number BB11980-X3.
Exhibition Commentary texts:
"The Q-Artifacts may represent an entirely different case. . . The other examples of “Computer Art” we have considered have consisted of the artist employing the computer as a supplementary Teknik in pursuit of a pre-existent human vision, with the human artist always serving as deployer and final editor of the result. However, the QANDAM computer, which has produced the Q-Artifacts, is entirely self-programming. For this reason, some claim that the Q-Artifacts are an example of entirely autonomous computer-generated artistic production with no human mediation except to render the final product according to the computer’s specifications. This would be a total reversal of the more common situation, with the human artist (San Francisco sculptor Stephen Hendee, in this case) serving as the supplementary Teknik for the machine. The human is the tool of the machine artist, instead of vice-versa!
Are the Q-Artifacts the products of an autonomous computer intelligence? I am not qualified to say and must remain doubtful until the question is ultimately settled one way or another. Perhaps, as skeptics have suggested, they are nothing but “random excretions.” Yet, I must admit that the Q-Artifacts have an appealing otherness (autretude) that leads me to believe that they do have artistic value in addition to whatever purely scientific interest they may possess."
-Robert L. Phillips, 1997
cited:
Jacques-Yves Fronsard, “Le Nouvelle Vague de L’art Americain de L’ordinateur”
Cahiers d’Art Contemperaine 12, pp. 125 - 143
translated by Michael Learned as “The New Wave of American Computer Art”
"When I started this project, I thought that my work would be merely an exercise in looking, editing, and 3-D transcription/animation: I believed that the QANDAM-M outputs were merely static, damaged data. Now, after working with this material, I am becoming more convinced that QANDAM-M is a highly organized and evolving system and that its artifacts are more than engine spam. . . Ironic viewpoints currently dominate conceptions of high art -- so, if the “art” created by monkeys and elephants is worthy of aesthetic consideration and interpretation, then I feel that the Q-Artifacts certainly qualify."
- Stephen Hendee, 1997