Computer modelling. Computer experiment

To give life to new design developments, to introduce new technical solutions into production, or to test new ideas, an experiment is needed. In the not too distant past, such an experiment could be carried out either in the laboratory, on installations specially built for it, or in the field, that is, on a real sample of the product, which was subjected to all kinds of tests. To study, for example, the operational properties of a unit or assembly, it was placed in a thermostat, frozen in special chambers, shaken on vibration stands, dropped, and so on. This is tolerable if the object is a new watch or a vacuum cleaner: little is lost if it is destroyed. But what if it is an airplane or a rocket?

Laboratory and field experiments require large expenditures of money and time, but their importance is nevertheless very great.

It has already been said that at the first stage, when analyzing the original object, elementary objects are identified which, in the course of modeling, are to be subjected to various experiments. Returning to the example of the airplane: for experiments with its units and systems, as they say, all means are good. A wind tunnel and full-scale models of the wings and fuselage are used to check the streamlining of the body, various simulation models can be used to test the emergency power supply and fire safety systems, and a special stand is indispensable for testing the landing gear extension system.

With the development of computer technology, a new and unique research method appeared: the computer experiment. In many cases computer studies of models came to assist, and sometimes to replace, experimental samples and test benches. The stage of carrying out a computer experiment itself includes two steps: drawing up a simulation plan and applying the simulation technology.

The simulation plan should clearly reflect the sequence of work with the model.

Often the plan is written down as a sequence of numbered items describing the actions that the researcher must perform with the computer model. There is no need to specify here which software tools will be used. A detailed plan is, in effect, a reflection of the strategy of the computer experiment.

The first step of this plan is always the development of a test, followed by testing of the model.

Testing is the process of verifying the correctness of a model.

A test is a set of initial data for which the result is known in advance.

To be sure of the correctness of the simulation results obtained, a computer experiment must first be carried out on the model with the compiled test. In doing so, the following should be kept in mind:

First, the test should always be aimed at checking the developed algorithm of the computer model's operation; it does not reflect the model's semantic content. However, the results obtained during testing may suggest changing the original information or sign model, in which the semantic content of the problem statement is primarily embodied.

Secondly, the initial data in the test need not reflect a real situation at all. It can be any collection of simple numbers or symbols. What matters is that the expected result is known in advance for a specific set of initial data. For example, suppose the model is given in the form of complex mathematical relationships and needs to be tested. You select several sets of the simplest initial values and calculate the final answer in advance, so that the expected result is known. You then run the computer experiment with these initial data and compare the result with the expected one. They must match; if they do not, the cause has to be found and eliminated.
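Such a check is easy to automate. Below is a minimal sketch in Python; the "model" function and the test values are invented purely for illustration and stand in for a real computer model:

# A minimal sketch of model testing: the "model" is a deliberately simple
# function, and each test pairs initial data with a result worked out in advance.
# Both the function and the test values are invented for illustration.

def model(x, y):
    # Hypothetical computer model: here just a simple relationship.
    return 2 * x + y ** 2

# Each test: (initial data, result known in advance)
tests = [
    ((0, 0), 0),
    ((1, 2), 6),
    ((3, 1), 7),
]

for args, expected in tests:
    result = model(*args)
    status = "OK" if result == expected else "MISMATCH: look for the cause"
    print(f"input={args}, expected={expected}, got={result}: {status}")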

After testing, when you have confidence in the correct functioning of the model, you go directly to modeling technology.

Modeling technology is a set of purposeful user actions on a computer model.

Each experiment should be accompanied by an understanding of the results, which will form the basis for the analysis of the simulation results.

An experiment is a method for studying a certain phenomenon under conditions controlled by the observer. It differs from observation by active interaction with the object under study. Usually an experiment is carried out as part of a scientific study and serves to test a hypothesis and to establish causal relationships between phenomena. Experiment is the cornerstone of the empirical approach to knowledge. Popper's criterion puts forward the possibility of setting up an experiment as the main distinction between a scientific theory and a pseudoscientific one.

Peculiarities:

  • the researcher himself causes the phenomenon under study, and does not wait for it to happen;
  • can change the conditions of the studied process;
  • in the experiment, you can alternately exclude individual conditions in order to establish regular connections;
  • the experiment makes it possible to vary the quantitative ratio of conditions and to carry out mathematical processing of the data.

Experiment models

There are several models of experiment:

Psychological experiment

A psychological experiment is an experiment conducted in special conditions to obtain new scientific knowledge through the purposeful intervention of the researcher in the life of the subject.

Thought experiment

A thought experiment in philosophy, physics and some other areas of knowledge is a type of cognitive activity in which the structure of a real experiment is reproduced in the imagination. As a rule, a thought experiment is carried out within the framework of a certain model (theory) to check its consistency. When conducting a thought experiment, contradictions of the internal postulates of the model or their incompatibility with external (in relation to this model) principles that are considered to be unconditionally true (for example, with the law of conservation of energy, the principle of causality, etc.) may be revealed.

Critical experiment

A critical experiment is an experiment whose outcome unambiguously determines whether a particular theory or hypothesis is correct. Such an experiment must give a predicted result that cannot be derived from other generally accepted hypotheses and theories.

Pilot experiment

A pilot experiment is a trial experimental study in which the main hypothesis, research approaches and plan are tested, the workability of the methods used is checked, and the technical aspects of the experimental procedures are refined. It is carried out on a small sample, without strict control of variables. A pilot experiment makes it possible to eliminate gross errors in the formulation of the hypothesis, to specify the goal, to clarify the experimental procedure, and to assess the possibility of obtaining an experimental effect.

Auxiliary methods

  • testing
  • analysis of activity products
  • mathematical statistics

L. V. Pigalitsyn,
www.levpi.narod.ru, MOU Secondary School No. 2, Dzerzhinsk, Nizhny Novgorod Region.

Computer physics experiment

4. Computational computer experiment

The computational experiment turns
into an independent field of science.
R. G. Efremov, Doctor of Physical and Mathematical Sciences

A computational computer experiment is in many respects similar to the usual (full-scale) one. It includes planning experiments, creating an experimental setup, performing control tests, conducting a series of experiments, processing the experimental data, interpreting them, etc. However, it is carried out not on a real object but on its mathematical model, and the role of the experimental setup is played by a computer equipped with a special program.

The computational experiment is becoming more and more popular. Many institutes and universities are engaged in it, for example Lomonosov Moscow State University, Moscow State Pedagogical University, the Institute of Cytology and Genetics of the SB RAS, the Institute of Molecular Biology of the Russian Academy of Sciences, and others. Scientists can already obtain important scientific results without a real, "wet" experiment. For this there is not only the computing power but also the necessary algorithms and, most importantly, the understanding. Where researchers used to distinguish in vivo and in vitro, now more and more is done in silico. In fact, the computational experiment is becoming an independent field of science.

The advantages of such an experiment are obvious. It is, as a rule, cheaper than a full-scale one. It can be interfered with easily and safely. It can be repeated and interrupted at any time. During such an experiment, conditions can be simulated that cannot be created in the laboratory. However, it is important to remember that a computational experiment cannot completely replace a full-scale one, and the future lies in their reasonable combination. A computational computer experiment serves as a bridge between the full-scale experiment and theoretical models. The starting point for numerical modeling is the development of an idealized model of the physical system under consideration.

Let's consider several examples of a computational physics experiment.

Moment of inertia. In "Open Physics" (2.6, part 1) there is an interesting computational experiment on finding the moment of inertia of a rigid body, using the example of a system of four balls strung on a single spoke. You can change the position of the balls on the spoke and choose the position of the rotation axis, passing it either through the center of the spoke or through its ends. For each arrangement of the balls, the students use Steiner's theorem on the parallel translation of the rotation axis to calculate the moment of inertia. The data for the calculations is provided by the teacher. After the moment of inertia has been calculated, the data is entered into the program and the students' results are checked.
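The calculation the students perform by hand is easy to reproduce; a short sketch is given below. The masses, positions and axis shift are arbitrary example values, not the data of the "Open Physics" model:

# Moment of inertia of point balls on a thin spoke (the spoke's own mass is neglected).
# For point masses I = sum(m_i * d_i^2), where d_i is the distance from the rotation
# axis; Steiner's (parallel-axis) theorem states I = I_c + M * a^2 when the axis is
# shifted by a distance a from the axis through the centre of mass.
# All numbers below are arbitrary example values.

masses = [0.1, 0.1, 0.2, 0.2]        # kg
positions = [-0.3, -0.1, 0.1, 0.3]   # m, coordinates of the balls along the spoke

def moment_of_inertia(axis):
    # I about an axis perpendicular to the spoke passing through the point 'axis'.
    return sum(m * (x - axis) ** 2 for m, x in zip(masses, positions))

M = sum(masses)
x_c = sum(m * x for m, x in zip(masses, positions)) / M   # centre of mass

I_c = moment_of_inertia(x_c)            # axis through the centre of mass
a = 0.25                                # shift of the axis, m
I_shifted = moment_of_inertia(x_c + a)

# Check of Steiner's theorem: the two printed values should coincide.
print(I_shifted, I_c + M * a ** 2)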

"Black box". To implement the computational experiment, my students and I created several programs for the study of the contents of an electrical "black box". It can contain resistors, incandescent bulbs, diodes, capacitors, coils, etc.

It turns out that in some cases it is possible, without opening the "black box", to find out its contents by connecting various devices to the input and output. Of course, at the school level, this can be done for a simple three- or four-port network. Such tasks develop students' imagination, spatial thinking and creativity, not to mention the need to have deep and lasting knowledge to solve them. Therefore, it is not by chance that at many all-Union and international olympiads in physics, the study of "black boxes" in mechanics, heat, electricity and optics is proposed as experimental problems.

In the special course classes, I conduct three real laboratory works, in which the "black box" contains:

- only resistors;

- resistors, incandescent lamps and diodes;

- resistors, capacitors, coils, transformers and oscillatory circuits.

Structurally, "black boxes" are arranged in empty matchboxes. Inside the box is an electrical circuit, and the box itself is sealed with tape. Research is carried out using instruments - avometers, generators, oscilloscopes, etc. for this it is necessary to build the I - V characteristic and the frequency characteristic. Students enter the instrument readings into a computer, which processes the results and builds the I – V characteristic and AFC. This allows students to figure out what parts are in the black box and determine their parameters.

When carrying out frontal (whole-class) laboratory work with "black boxes", difficulties arise because of the lack of instruments and laboratory equipment. Indeed, to carry out the study one would need, say, 15 oscilloscopes, 15 audio generators, and so on: 15 sets of expensive equipment that most schools do not have. This is where virtual "black boxes" come to the rescue - the corresponding computer programs.

The advantage of these programs is that research can be done simultaneously by the whole class. As an example, consider a program that implements "black boxes" containing only resistors using a random number generator. There is a "black box" on the left side of the desktop. It has an electrical circuit consisting only of resistors that can be located between points A, B, C and D.

The student has three devices at his disposal: a power source (its internal resistance is taken equal to zero to simplify calculations, and the EMF is randomly generated by the program); voltmeter (internal resistance is equal to infinity); ammeter (internal resistance is zero).

When the program is started, an electrical circuit containing from one to four resistors is randomly generated inside the "black box". The student is given four attempts. After pressing any key, he is invited to connect any of the offered devices, in any order, to the terminals of the "black box". For example, he connects a current source with an EMF of 3 V to terminals AB (the EMF value is generated randomly by the program; in this case it happened to be 3 V). To terminals CD he connects a voltmeter, whose reading turns out to be 2.5 V. From this it should be concluded that the "black box" contains at least a voltage divider. To continue the experiment, an ammeter can be connected instead of the voltmeter and its readings taken. These data are clearly not enough to solve the puzzle, so two more experiments can be carried out: the current source is connected to terminals CD, and the voltmeter and ammeter to terminals AB. The data obtained in this way are already quite enough to work out the contents of the "black box". The student draws the circuit on paper, calculates the parameters of the resistors and shows the results to the teacher.

The teacher, after checking the work, enters the appropriate code into the program, and the circuit inside this "black box" and the parameters of the resistors appear on the desktop.

The program was written by my students in BASIC. To run it under Windows XP or Windows Vista you can use a DOS emulator, for example DOSBox. You can download it from my website www.physics-computer.by.ru.
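A much simplified sketch of the same idea, in Python rather than the original BASIC, might look as follows. The hidden circuit here is reduced to a single randomly generated voltage divider, which is only one of the cases the real program can produce:

# Sketch of a virtual "black box" with resistors (a strong simplification of the
# program described above). Hidden circuit: a divider of two resistors R1 and R2
# in series across terminals AB, with terminals CD taken across R2.
import random

emf = random.randint(1, 10)        # ideal source connected to AB, V (internal r = 0)
R1 = random.randint(1, 10) * 100   # ohms
R2 = random.randint(1, 10) * 100   # ohms

def voltmeter_on_CD():
    # Ideal voltmeter on CD (infinite resistance), source on AB.
    return emf * R2 / (R1 + R2)

def ammeter_on_CD():
    # Ideal ammeter on CD (zero resistance) short-circuits R2, source on AB.
    return emf / R1

u = voltmeter_on_CD()
i = ammeter_on_CD()
print(f"EMF applied to AB: {emf} V")
print(f"Voltmeter on CD:   {u:.2f} V")
print(f"Ammeter on CD:     {i * 1000:.2f} mA")

# From these readings the student can recover the hidden values:
#   R1 = EMF / I_CD,   R2 = U_CD * R1 / (EMF - U_CD)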

If there are nonlinear elements inside the "black box" (incandescent lamps, diodes, etc.), then in addition to direct measurements the I-V characteristic will have to be taken. For this purpose a voltage source is needed whose output voltage can be varied from 0 to a certain value.
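Taking such a characteristic on a virtual model amounts to sweeping the source voltage and recording the current. In the sketch below the nonlinear element is a diode described by the idealized Shockley equation with assumed parameters; a real lamp or diode would of course behave somewhat differently:

# Sketch of taking the I-V characteristic of a nonlinear element (a diode).
# The idealized Shockley equation with assumed parameters stands in for the device.
import math

I_S = 1e-9    # saturation current, A (assumed)
V_T = 0.025   # thermal voltage, V (room temperature)
n = 1.5       # ideality factor (assumed)

print(" U, V      I, mA")
for step in range(16):
    u = step * 0.05                          # sweep the source from 0 to 0.75 V
    i = I_S * (math.exp(u / (n * V_T)) - 1)  # diode current at this voltage
    print(f"{u:5.2f}  {i * 1000:10.4f}")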

To study inductances and capacitances, the frequency response must be taken using a virtual audio generator and an oscilloscope.


Speed selector. Consider another program from "Open Physics" (2.6, part 2), which allows a computational experiment to be carried out with the velocity selector of a mass spectrometer. To determine the mass of a particle with a mass spectrometer, the charged particles must first be selected by velocity. This purpose is served by so-called velocity selectors.

In the simplest velocity selector, charged particles move in crossed uniform electric and magnetic fields. The electric field is created between the plates of a flat capacitor, the magnetic field in the gap of an electromagnet. The initial velocity υ of the charged particles is directed perpendicular to the vectors E and B.

Two forces act on a charged particle: the electric force qE and the magnetic Lorentz force qυ × B. Under certain conditions these forces can exactly balance each other. In this case the charged particle moves uniformly and rectilinearly and, having flown through the capacitor, passes through a small hole in the screen.

The condition for a rectilinear trajectory of the particle does not depend on the charge or mass of the particle, but only on its velocity: qE = qυB, from which υ = E / B.

In the computer model you can change the values of the electric field strength E, the magnetic induction B and the initial particle velocity υ. Velocity selection experiments can be performed for an electron, a proton, an alpha particle, and fully ionized uranium-235 and uranium-238 atoms. The computational experiment with this model proceeds as follows: the students are told which charged particle flies into the velocity selector, the electric field strength and the initial speed of the particle. The students calculate the magnetic induction from the formula above. The data is then entered into the program and the flight of the particle is observed. If the particle flies horizontally inside the velocity selector, the calculations were done correctly.
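The preliminary calculation itself is a one-line formula. A small sketch under assumed numbers (they are not the values used in the "Open Physics" model) shows it together with a check of the force balance:

# Velocity selector: the particle moves in a straight line when qE = q*v*B,
# i.e. the required magnetic induction is B = E / v (independent of q and m).
# The field strength and speed below are arbitrary example values.

E = 2.0e5    # electric field strength, V/m
v = 4.0e6    # initial particle speed, m/s

B = E / v    # required magnetic induction, T
print(f"B = {B:.3f} T")

# Check of the force balance, e.g. for a proton (q cancels out anyway):
q = 1.602e-19                     # elementary charge, C
print(q * E, q * v * B)           # the electric and magnetic forces coincide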

More complex computational experiments can be performed with the free package "MODEL VISION for WINDOWS". The ModelVisionStudium (MVS) package is an integrated graphical shell for quickly creating interactive visual models of complex dynamic systems and performing computational experiments with them. The package was developed by the research group "Experimental Object Technologies" at the Department of "Distributed Computing and Computer Networks" of the Faculty of Technical Cybernetics, St. Petersburg State Technical University. A freely redistributable version of the package, MVS 3.0, is available at www.exponenta.ru. The MVS simulation technology is based on the concept of a virtual laboratory bench. On the bench the user places virtual blocks of the simulated system. The virtual blocks for a model are either selected from the library or created by the user from scratch. The MVS package is designed to automate the main stages of a computational experiment: building a mathematical model of the object under study, generating a software implementation of the model, studying the properties of the model, and presenting the results in a form convenient for analysis. The object under study can belong to the class of continuous, discrete or hybrid systems. The package is best suited for studying complex physical and technical systems.


As an example, consider a fairly popular problem. Let a material point be thrown at a certain angle to the horizontal plane and collide with this plane absolutely elastically. This model has become almost obligatory in the demo set of simulation packages. Indeed, it is a typical hybrid system with continuous behavior (flight in a gravity field) and discrete events (bounces). The example also illustrates the object-oriented approach to modeling: a ball flying in the atmosphere is a descendant of a ball flying in airless space; it automatically inherits all the common features while adding its own characteristics.
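The behaviour of this model is easy to reproduce outside MVS as well. Below is a minimal sketch (not the MVS implementation): the continuous part, flight in the gravity field, is integrated step by step, and the bounce is handled as a discrete event; air resistance is ignored and all numbers are arbitrary:

# Hybrid "bouncing ball" model: continuous flight in a gravity field plus a
# discrete event (a perfectly elastic bounce) whenever the ball reaches the plane y = 0.
# Simple explicit Euler integration; parameters are arbitrary example values.
import math

g = 9.81                       # m/s^2
angle = math.radians(45.0)     # throwing angle
v0 = 10.0                      # initial speed, m/s
dt = 1e-3                      # integration step, s

x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

t = 0.0
bounces = 0
while bounces < 3:
    # continuous behaviour: flight in the gravity field
    x += vx * dt
    y += vy * dt
    vy -= g * dt
    t += dt
    # discrete event: elastic collision with the plane
    if y <= 0.0 and vy < 0.0:
        y = 0.0
        vy = -vy               # perfectly elastic bounce
        bounces += 1
        print(f"bounce {bounces}: t = {t:.2f} s, x = {x:.2f} m")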

The last, final stage of modeling from the user's point of view is describing the form in which the results of the computational experiment are presented. These can be tables, graphs, surfaces and even animations that illustrate the results in real time, so that the user actually observes the dynamics of the system. Points in the phase space can move, the colors of structural elements drawn by the user can change, and the user can follow on the screen, for example, heating or cooling processes. In the software packages created to implement the model, special windows can be provided that allow the values of parameters to be changed during the computational experiment and the consequences of the changes to be seen immediately.

A lot of work on the visual modeling of physical processes in MVS has been done at Moscow State Pedagogical University. A number of virtual works for the general physics course have been developed there; they can be linked to real experimental installations, which makes it possible to observe on the display, simultaneously and in real time, the change in the parameters of both the real physical process and its model, clearly demonstrating the model's adequacy. As an example, I cite seven laboratory works on mechanics from the laboratory workshop of the Internet portal of open education, which corresponds to the existing state educational standards for the specialty "physics teacher": the study of rectilinear motion using an Atwood machine; measuring the speed of a bullet; addition of harmonic oscillations; measuring the moment of inertia of a bicycle wheel; the study of the rotational motion of a rigid body; determination of the acceleration of gravity using a physical pendulum; the study of free oscillations of a physical pendulum.

The first six are virtual, simulated on a PC in ModelVisionStudiumFree; the last has both a virtual version and two real ones. In one, intended for distance learning, the student makes a pendulum himself from a large paper clip and an eraser and hangs it under the shaft of a computer mouse with the ball removed; the deflection angle is read by a special program and is used by the student when processing the results of the experiment. This approach allows some of the skills needed for experimental work to be practiced on a PC alone, and the rest when working with available real devices and with remote access to equipment. In the other version, intended for the home preparation of full-time students for laboratory work in the workshop of the Department of General and Experimental Physics of the Faculty of Physics of Moscow State Pedagogical University, the student practices working with the experimental installation on the virtual model, and in the laboratory conducts the experiment simultaneously on a specific real installation and on its virtual model. He uses both traditional measuring instruments, an optical scale and a stopwatch, and more accurate and faster means: a motion sensor based on an optical mouse and a computer timer. Simultaneous comparison of all three representations of the same phenomenon (traditional, refined with electronic sensors connected to a computer, and the model) allows one to judge the limits of the model's adequacy, when the computer simulation data after a while begin to diverge more and more from the readings taken on the real installation.

The above does not exhaust the possibilities of using a computer in a physical computational experiment. So for a creatively working teacher and his students there will always be untapped opportunities in the field of virtual and real physical experiment.

If you have any comments and suggestions on various types of physical computer experiment, write to me at:

Computer modeling is the basis for representing knowledge in a computer. To create new information, computer modeling uses any information that can be updated with the help of a computer. Progress in modeling is associated with the development of computer modeling systems, and progress in information technology with the accumulation of experience of modeling on a computer, with the creation of banks of models, methods and software systems that make it possible to assemble new models from the models in a bank.

A kind of computer simulation is the computational experiment, that is, an experiment carried out by an experimenter on a system or process under study with the help of an experimental tool: a computer, a computing environment, a technology.

A computational experiment is becoming a new tool, a method of scientific knowledge, a new technology also due to the growing need to move from the study of linear mathematical models of systems (for which research methods and theory are well known or developed) to the study of complex and nonlinear mathematical models of systems (the analysis of which is much more difficult). Roughly speaking, our knowledge of the world around us is linear, and the processes in the outside world are non-linear.

A computational experiment allows you to find new patterns, test hypotheses, visualize the course of events, etc.


A computer experiment includes a certain sequence of work with the model: a set of purposeful actions of the user on the computer model.

Stage 4. Analysis of simulation results.

The final goal of modeling is making a decision, which should be developed on the basis of a comprehensive analysis of the results. This stage is decisive: either you continue the research or you finish it. Perhaps you know the expected result; then you need to compare the result obtained with the expected one. If they match, you can make the decision.

The results of testing and experiments are the basis for developing a decision. If the results do not correspond to the goals of the task, it means that mistakes were made at the previous stages. This can be an overly simplified construction of the information model, an unsuccessful choice of modeling method or environment, or a violation of technological techniques when building the model. If such errors are identified, the model must be corrected, that is, one must return to one of the previous stages. The process is repeated until the results of the experiment meet the goals of the modeling. The main thing to remember is that a detected error is also a result: as folk wisdom says, one learns from mistakes.

Simulation programs

ANSYS is a universal software system for finite element (FEM) analysis that has existed and developed over the past 30 years. It is quite popular among specialists in computer-aided engineering (CAE) for finite element solutions of linear and nonlinear, stationary and nonstationary spatial problems of solid and structural mechanics (including nonstationary geometrically and physically nonlinear problems of contact interaction of structural elements), problems of fluid and gas mechanics, heat transfer and heat exchange, electrodynamics, acoustics, and the mechanics of coupled fields. In some industries, simulation and analysis make it possible to avoid costly and time-consuming design-build-test cycles. The system works on the basis of the Parasolid geometric kernel.

AnyLogic is software for the simulation of complex systems and processes, developed by the Russian company XJ Technologies. The program has a graphical user environment and allows models to be developed in the Java language.

AnyLogic models can be based on any of the main simulation paradigms: discrete event simulation, system dynamics, and agent-based modeling.

System dynamics and discrete-event (process) modeling, by which we mean any development of the ideas of GPSS, are traditional, well-established approaches; agent-based modeling is relatively new. System dynamics operates mainly with processes that are continuous in time, while discrete-event and agent-based modeling operate with discrete ones.

System dynamics and discrete-event modeling have historically been taught to completely different groups of students: management, production engineers, and control system design engineers. As a result, three distinct, virtually non-overlapping communities have emerged that have little to no communication with each other.

Until recently, agent-based modeling was a strictly academic direction. However, the growing demand for global optimization from business has forced leading analysts to pay attention to agent-based modeling and its combination with traditional approaches in order to obtain a more complete picture of the interaction of complex processes of various nature. This is how the demand for software platforms was born, allowing the integration of different approaches.

Now let us consider the simulation approaches on the scale of the level of abstraction. System dynamics, replacing individual objects with their aggregates, presupposes the highest level of abstraction. Discrete-event modeling works in the low and middle range. Agent-based modeling can be applied at almost any level and on any scale: agents can represent pedestrians, cars or robots in physical space, a customer or a salesperson at an intermediate level, or competing companies at a high level.

When developing models in AnyLogic, you can use concepts and tools from several modeling methods; for example, in an agent-based model you can use system dynamics methods to represent changes in the state of the environment, or take discrete events into account in a continuous model of a dynamic system. For example, supply chain management by means of simulation requires describing the participants in the supply chain by agents: producers, sellers, consumers, a warehouse network. Production is described within discrete-event (process) modeling, where the product or its parts are the entities being processed and cars, trains and stackers are resources. The deliveries themselves are represented by discrete events, while the demand for goods can be described by a continuous system-dynamics diagram. The ability to mix approaches allows you to describe real-life processes rather than adjust the process to the available mathematical apparatus.

LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench) is a development environment and platform for executing programs created in the graphical programming language "G" from National Instruments (USA). The first version of LabVIEW was released in 1986 for the Apple Macintosh; versions currently exist for UNIX, GNU/Linux, Mac OS, etc., and the most developed and popular versions are those for Microsoft Windows.

LabVIEW is used in data acquisition and processing systems, as well as for controlling technical objects and technological processes. Ideologically, LabVIEW is very close to SCADA systems, but unlike them it is focused less on industrial process control systems (APCS) and more on automated scientific research systems (ASNI).

MATLAB (short for "Matrix Laboratory") is a term referring both to a package of application programs for solving technical computing problems and to the programming language used in this package. MATLAB is used by more than 1,000,000 engineers and scientists; it works on most modern operating systems, including GNU/Linux, Mac OS, Solaris and Microsoft Windows.

Maple is a software package, a computer algebra system. It is a product of Waterloo Maple Inc., which since 1984 has released and marketed software products focused on complex mathematical calculations, data visualization and modeling.

The Maple system is designed for symbolic computation, although it also has a number of tools for the numerical solution of differential equations and the evaluation of integrals. It possesses advanced graphical tools and has its own programming language resembling Pascal.

Mathematica is a computer algebra system from Wolfram Research. It contains many functions both for analytical transformations and for numerical calculations. In addition, the program supports work with graphics and sound, including the construction of two- and three-dimensional graphs of functions, the drawing of arbitrary geometric shapes, and the import and export of images and sound.

Forecasting tools are software products that have functions for calculating forecasts. Forecasting is one of the most important kinds of human activity today. Even in ancient times, forecasts allowed people to calculate periods of drought, the dates of solar and lunar eclipses, and many other phenomena. With the advent of computer technology, forecasting received a powerful impetus for development. One of the first applications of computers was the calculation of the ballistic trajectory of projectiles, that is, in effect, predicting the point of impact of a projectile on the ground. This type of forecast is called a static forecast. There are two main categories of forecasts: static and dynamic. The key difference is that dynamic forecasts provide information about the behavior of the object under study over a significant period of time, whereas static forecasts reflect the state of the object only at a single moment in time and, as a rule, the time factor plays an insignificant role in them (a small numerical illustration of this distinction is given after the table below). Today there are a large number of tools that allow forecasts to be made. All of them can be classified according to many criteria:

Tool name | Scope of application | Implemented models | Required user training | Ready for operation
Microsoft Excel, OpenOffice.org | wide application | algorithmic, regression | basic knowledge of statistics | significant improvement required (implementation of models)
Statistica, SPSS, E-views | research | a wide range of regression, neural network | - | boxed product
Matlab | research, application development | algorithmic, regression, neural network | special mathematical education | programming required
SAP APO | business forecasting | algorithmic | no deep knowledge required | -
ForecastPro, ForecastX | business forecasting | algorithmic | no deep knowledge required | boxed product
Logility | business forecasting | algorithmic, neural network | no deep knowledge required | significant revision required (for business processes)
ForecastPro SDK | business forecasting | algorithmic | basic knowledge of statistics required | programming required (integration with software)
iLog, AnyLogic, iThink, MatlabSimulink, GPSS | application development, modeling | simulation | special mathematical education required | programming required (for the specifics of the area)
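To illustrate the difference between a static and a dynamic forecast mentioned above, here is a small sketch on the classic ballistic example (no air resistance, arbitrary numbers): the static forecast gives only the point of impact, the dynamic one the whole course of the flight:

# Static vs. dynamic forecast on the ballistic example (no air resistance).
import math

g = 9.81                         # m/s^2
v0 = 120.0                       # initial projectile speed, m/s (arbitrary)
angle = math.radians(30.0)       # launch angle

# Static forecast: only the final state of interest, the point of impact.
impact_range = v0 ** 2 * math.sin(2 * angle) / g
print(f"static forecast of the impact point: x = {impact_range:.1f} m")

# Dynamic forecast: the state of the projectile at successive moments of time.
flight_time = 2 * v0 * math.sin(angle) / g
for k in range(6):
    t = flight_time * k / 5
    x = v0 * math.cos(angle) * t
    y = v0 * math.sin(angle) * t - g * t ** 2 / 2
    print(f"t = {t:5.2f} s   x = {x:7.1f} m   y = {y:6.1f} m")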

SP LIRA is a multifunctional software package designed for the design and analysis of machine-building and building structures for various purposes. Calculations in the program are performed for both static and dynamic loads. The calculations are based on the finite element method (FEM). Various plug-in modules (processors) make it possible to select and check sections of steel and reinforced concrete structures, model soil, calculate bridges and the behavior of buildings during erection, etc.

In the above definition, the term "experiment" has a dual meaning. On the one hand, in a computer experiment, as well as in a real one, the system's responses to certain changes in parameters or to external influences are investigated. Temperature, density, composition are often used as parameters. And the impacts are most often realized through mechanical, electrical or magnetic fields. The only difference is that the experimenter deals with a real system, while in a computer experiment, the behavior of a mathematical model of a real object is considered. On the other hand, the ability to obtain rigorous results for well-defined models makes it possible to use a computer experiment as an independent source of information to test the predictions of analytical theories and, therefore, in this capacity, the simulation results play the role of the same standard as experimental data.

From all that has been said, it is clear that two very different approaches to setting up a computer experiment are possible; the choice between them is dictated by the nature of the problem being solved and thereby determines the choice of the model description.

First, calculations by MD (molecular dynamics) or MC (Monte Carlo) methods can pursue purely utilitarian goals related to predicting the properties of a specific real system and comparing them with a physical experiment. In this case interesting predictions can be made and research carried out under extreme conditions, for example at ultrahigh pressures or temperatures, when a real experiment is for various reasons unfeasible or would require too great material expenditures. Computer simulation is often the only way to obtain the most detailed ("microscopic") information about the behavior of a complex molecular system. This is shown especially clearly by numerical experiments of the dynamic type with various biosystems: globular proteins in the native state, fragments of DNA and RNA, lipid membranes. In a number of cases, the data obtained have made it necessary to revise or significantly change previously existing ideas about the structure and functioning of these objects. It should be borne in mind that such calculations use various kinds of valence and non-valence potentials that only approximate the true interactions of atoms, and this circumstance ultimately determines the degree of correspondence between the model and reality. First the inverse problem is solved, when the potentials are calibrated against the available experimental data, and only then are these potentials used to obtain more detailed information about the system. Sometimes the parameters of interatomic interactions can in principle be found from quantum-chemical calculations performed for simpler model compounds. In simulations by MD or MC methods, a molecule is interpreted not as a collection of electrons and nuclei obeying the laws of quantum mechanics, but as a system of bound classical particles, atoms. Such a model is called a mechanical model of the molecule.

The goal of the other approach to setting up a computer experiment may be to understand the general (universal, or model-invariant) patterns of behavior of the system under study, that is, those patterns that are determined only by the most typical features of a given class of objects and not by the details of the chemical structure of a particular compound. In other words, in this case the computer experiment is aimed at establishing functional relationships rather than calculating numerical parameters. This ideology is most clearly present in the scaling theory of polymers. From the point of view of this approach, computer modeling acts as a theoretical tool which, first of all, makes it possible to check the conclusions of existing analytical methods of the theory or to supplement their predictions. This interaction between analytical theory and computer experiment can be very fruitful when identical models can be used in both approaches. The most striking example of this kind of generalized model of polymer molecules is the so-called lattice model. Many theoretical constructions have been based on it, in particular those related to the solution of the classical and, in a sense, main problem of polymer physical chemistry: the effect of bulk interactions on the conformation and, accordingly, on the properties of a flexible polymer chain. Bulk interactions are usually understood to mean the short-range repulsive forces that arise between links distant along the chain when they approach each other in space as a result of random bends of the macromolecule. In the lattice model a real chain is considered as a broken trajectory that passes through the nodes of a regular lattice of a given type: cubic, tetrahedral, etc. The occupied lattice sites correspond to polymer links (monomers), and the segments connecting them to the chemical bonds in the backbone of the macromolecule. The prohibition of self-intersections of the trajectory (in other words, the impossibility of two or more monomers occupying one lattice site simultaneously) models the bulk interactions (Fig. 1). If, for example, the MC method is used and a randomly chosen link, when displaced, lands in an already occupied node, then the new conformation is discarded and is not taken into account in calculating the parameters of interest. The different arrangements of the chain on the lattice correspond to the conformations of the polymer chain, and the required characteristics, for example the distance between the ends of the chain R, are averaged over them.
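A very simple numerical realization of this lattice picture is sketched below. Instead of the chain-move Monte Carlo procedure described above it uses straightforward simple sampling: random walks on a cubic lattice are generated, those that visit an occupied site are discarded, and the end-to-end distance is averaged over the surviving conformations:

# Lattice polymer sketch: self-avoiding random walks on a simple cubic lattice.
# A walk that steps onto an already occupied node is rejected (this models the
# bulk interactions); <R^2> is averaged over the accepted conformations.
# Simple sampling, suitable only for short chains.
import random

STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def try_walk(n_links):
    # Return the squared end-to-end distance, or None if the walk self-intersects.
    site = (0, 0, 0)
    visited = {site}
    for _ in range(n_links):
        dx, dy, dz = random.choice(STEPS)
        site = (site[0] + dx, site[1] + dy, site[2] + dz)
        if site in visited:        # occupied node: the conformation is discarded
            return None
        visited.add(site)
    return site[0] ** 2 + site[1] ** 2 + site[2] ** 2

def mean_square_R(n_links, attempts=100000):
    samples = [r2 for r2 in (try_walk(n_links) for _ in range(attempts)) if r2 is not None]
    return sum(samples) / len(samples)

for n in (5, 10, 20):
    print(f"N = {n:2d}   <R^2> = {mean_square_R(n):.2f}")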

The study of such a model makes it possible to understand how bulk interactions affect the dependence of the root-mean-square end-to-end distance on the number of links in the chain N. This quantity, which determines the average size of the polymer coil, plays a major role in various theoretical constructions and can be measured experimentally; however, there is still no exact analytical formula for its dependence on N in the presence of bulk interactions. One can also introduce an additional energy of attraction between those pairs of links that fall into adjacent lattice sites. By varying this energy in a computer experiment it is possible, in particular, to investigate an interesting phenomenon called the "coil-globule" transition, when, owing to the forces of intramolecular attraction, an unfolded polymer coil contracts and turns into a compact structure, a globule, resembling a microscopic liquid drop. Understanding the details of this transition is important for developing the most general ideas about the course of the biological evolution that led to the emergence of globular proteins.

There are various modifications of lattice models, for example those in which the bond lengths between links do not have fixed values but can vary within a certain interval, with only the prohibition of self-intersections of the chain guaranteed; this is how the widespread model with "fluctuating bonds" works. What all lattice models have in common is that they are discrete, that is, the number of possible conformations of such a system is always finite (although it can be astronomically large even for a relatively small number of links in the chain). Discrete models have very high computational efficiency but, as a rule, can only be investigated by the Monte Carlo method.

In a number of cases, continuum generalized polymer models are used, which can change their conformation continuously. The simplest example is a chain made up of a given number N of solid beads connected in series by rigid or elastic bonds. Such systems can be studied both by the Monte Carlo method and by the molecular dynamics method.
