1. Bioinformatics scientists often describe their own scientific activity as the practice of working with large amounts of data using computing devices. An essential part of their self-identification is also the development of ways to represent the results of this work visually. Some of these methods aim at building convenient representations of data and demonstrating the patterns present in them (charts, diagrams, graphs). Others are ways of visualizing objects that are not directly accessible to human perception (microphotography, X-ray imaging). Both the construction of visualizations and, especially, the creation of new computer visualization methods are regarded in bioinformatics as significant scientific achievements. Representations of the three-dimensional structure of protein molecules play a special role in the inquiries of bioinformatics scientists. A 3D visualization of a macromolecule is, on the one hand, like a graph: a representation of the results of computer processing of data arrays obtained by material methods, namely the spatiotemporal coordinates of the structural elements of the molecule. On the other hand, like microphotographs, these 3D structures should serve as accurate representations of specific scientific objects. This leads to the parallel existence of two contradictory epistemic regimes: creative arbitrariness in making convenient, communicatively successful models is combined with a commitment to the object “as it really is”. The paradox is reinforced by the fact that the computational study of the objects in question (determining the properties of a structure, its functions, comparing it with other structures) does not require visualization at all. The evidently high value of visualization for bioinformatics does not look justified if we take into account the pronounced artificiality and artistry of the resulting images. However, the status of these images becomes clearer if we relate them to earlier notions of the role of the visual in scientific discovery. The highest estimation of visualization as the final result of scientific research was characteristic of Renaissance science. The artistic representation of ideal essential properties, instead of strict correspondence to a particular biological object, is an epistemic virtue typical of the naturalists of the 17th and 18th centuries. Both presupposed close collaboration between scientist and artist, and the standards for visualizing macromolecules in bioinformatics grew out of a similar collaboration (Geis’s drawings). The desire for maximum accuracy and detail inherits the regulative ideal of “mechanical objectivity” (in Daston and Galison’s terms), for which it is likewise important to eliminate humans from the image production process (in bioinformatics, to transfer these functions to computer programs). Thus, 3D visualization of protein structures bears traces of historically different value orientations, but the scientific practice of the 20th and 21st centuries, supplemented by computer technologies, allows them to be intertwined within particular disciplinary units.
Keywords: epistemology, visualization, scientific object, bioinformatics, data analysis
Downloads: 1017
2. The article examines the epistemological relations between the classical laboratory experiment, the thought experiment, and the computational experiment. In the context of the modern history of the philosophy and methodology of science, from positivism to the so-called experimental turn and contemporary discussions of immaterial experiments, the question of the epistemological similarities and differences between material, thought, and computational experiments is raised, as is the question of the methodological specificity of the experiment both as a concrete scientific method and as a generic concept for this controversial but de facto used taxonomy. A common feature of all quasi-experimental methods in scientific knowledge is their semiotic function as a means of ensuring objectivity and giving meaning to the formal structures of knowledge. The first section examines the so-called “experimental turn” in the philosophy of science, associated with the works of the Stanford School and with the transition from understanding the experiment as “simply” armed observation to its interpretation as a practice of active intervention in reality and the production of facts. The moment of “spontaneous realism” in experimental science and the presence, as noted by Ian Hacking, of a “life of their own” in experimental practices and the facts reproduced in them are emphasized. The second section is devoted to the epistemology of thought experiments. The arguments in favor of denying thought experiments’ “experimental nature” and recognizing them as a type of theoretical model that deals exclusively with the logical consequences and logical integrity (consistency) of a theory are critically examined. Using the example of the EPR paradox and related episodes in the history of physics, the ability of thought experiments to create new knowledge and “live a life of their own” is emphasized, i.e., their ability to be reproduced in different theoretical contexts and to yield results other than those supposedly fixed once and for all by their logical structure. The third section emphasizes that computational experiments and digital simulations are similar to thought experiments in their “immateriality” but differ in the cognitive infrastructure used and in the transparency of obtaining results. While a thought experiment relies on the work of the imagination and provides immediate clarity about how a result is obtained, a computer simulation uses an “external” computational infrastructure and, owing to the high complexity of its models and calculations, makes the origin of specific observed results opaque to the researcher, which brings simulations closer to classical laboratory experiments. At the same time, the ability of modern computer simulations to model empirically non-existent objects, endowing them with observability, and to produce different results in different iterations underscores their methodological “experimentality” as sources of new quasi-empirical data. In conclusion, it is noted that a productive solution to the “taxonomic confusion” is the recognition of the essential epistemological kinship of material, thought, and computational experiments, although the exact degree of this kinship has yet to be clarified.
The presented material is intended for the lecture part of the courses Philosophy and Methodology of Science and Modeling, Forecasting and Expertise in Scientific Activity (taught, respectively, in semesters 3–4 and 7–8 to undergraduate students of the Faculty of Philosophy of Lomonosov Moscow State University), and is taught in full by the authors within the framework of the course Experimental Practices in the Methodology of Social Sciences (a Master’s program of the Faculty of Philosophy).
Keywords: methodology of science, experiment, experimental turn, philosophy of experimentation, thought experiment, computational experiment
Downloads: 175