What Is Multimedia? The Main Components of Multimedia Presentation of Information

Today, the term "multimedia" is quite clear: it denotes a combination of well-known methods of transmitting information, such as images, speech, writing, and gestures. As a rule, this combination is carefully thought out and assembled from different elements that complement each other to create an overall intelligible picture. All of this can be observed on almost any information resource, such as a news feed with photos or attached videos. A project can be linear, when the story is built by its creator and unfolds in a fixed order, but there are also other types, such as interactive and transmedia projects, which make the plot non-linear and give the user the opportunity to follow a script of his own. These are advanced features for creating more engaging content that the user will want to come back to again and again.

The key point in the concept of "multimedia" is that the combination of basic media elements is built on the basis of a computer or other digital technology. It follows that the standard constituents of multimedia take on an extended meaning (Vaughan, T. Multimedia: Making It Work, 7th ed. New Delhi: McGraw-Hill, 2008, pp. 1-3, 25-40, 53-60):

1. Text. Written language is the most common way of conveying information and one of the main components of multimedia. Originally this meant print media such as books and newspapers, which used a variety of fonts to display letters, numbers, and special characters. Although multimedia products also include photos, audio, and video, text remains the most common type of data found in multimedia applications. In addition, text makes it possible to extend the traditional power of writing by linking it to other media, making it interactive.

a. Static text. In static text, words are laid out to fit well into the graphical environment. Words are embedded in the screen layout in much the same way as charts and explanations are placed on the pages of a book; that is, the information is well thought out, and the reader can not only see the images but also read the textual information (Kindersley, P. Multimedia: The Complete Guide. New York: DK, 1996).

b. Hypertext. A hypertext system consists of nodes: it contains text and links between nodes, which define paths the user can follow to access the text non-sequentially. Links represent associations of meaning and can be viewed as cross-references. This structure is created by the author of the system, although in more complex hypertext systems the user can define his own paths. Hypertext gives the user flexibility and choice while moving through the material. Well-formed sentences and paragraphs, spacing, and punctuation also affect the readability of the text.

2. Sound. Sound is the most sensual element of multimedia: it is direct speech in any language, from a whisper to a shout; it can provide the pleasure of listening to music or create a striking background effect or mood; it can create an artistic image by adding the effect of a narrator's presence to a text; it can help you learn the pronunciation of a word in another language. Sound pressure level is measured in decibels and should lie within the range in which the human ear perceives the sound volume adequately.

a. Musical Instrument Digital Interface (MIDI). MIDI is a communication standard developed in the early 1980s for electronic musical instruments and computers. It is a shorthand representation of music stored in numerical form. MIDI is the fastest, easiest, and most flexible tool for composing a musical score in a multimedia project. Its quality depends on the quality of the musical instruments and the capabilities of the sound system (Vaughan, T. Multimedia: Making It Work, 7th ed. New Delhi: McGraw-Hill, 2008, pp. 106-120).

b. Digitized and recorded sound (digital audio). Digitized audio is sampled audio: many times per second, a sample of the sound is taken and stored as digital information in bits and bytes. The quality of the recording depends on how often the samples are taken (the sample rate) and how many numbers are used to represent the value of each sample (the bit depth, sample size, or resolution). The more often a sample is taken and the more data is stored about it, the better the resolution and quality of the captured audio on playback. Digital audio quality also depends on the quality of the original audio source, the capture devices and supporting software, and the playback environment.
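The effect of bit depth on resolution can be sketched in a few lines of Python. This is an illustrative toy, not production audio code: it quantizes a pure sine wave to a given number of bits and measures the average quantization error, showing why more bits per sample give a cleaner recording.

```python
import math

def quantize(sample, bits):
    """Map a sample in [-1.0, 1.0] to the nearest of 2**bits signed levels and back."""
    levels = 2 ** (bits - 1)
    return max(-levels, min(levels - 1, round(sample * levels))) / levels

def quantization_error(bits, sample_rate=44100, freq=440.0):
    """Mean absolute quantization error over one second of a sine wave."""
    total = 0.0
    for i in range(sample_rate):
        s = math.sin(2 * math.pi * freq * i / sample_rate)
        total += abs(s - quantize(s, bits))
    return total / sample_rate

# Higher bit depth -> smaller quantization error -> better resolution.
for bits in (8, 16, 24):
    print(bits, quantization_error(bits))
```

The printed errors shrink by roughly a factor of 256 for each extra byte per sample, which is the quantitative content of the claim above.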

3. Image. The image is an important component of multimedia, since it is known that a person receives most information about the world through sight, and an image always visualizes what the text describes (Dvorko, N. I. Basics of Directing Multimedia Programs. SPbGUP, 2005. ISBN 5-7621-0330-7, pp. 73-80). Computers generate images in two ways: as bitmaps and as vectors (Vaughan, T. Multimedia: Making It Work, 7th ed. New Delhi: McGraw-Hill, 2008, pp. 70-81).

a. Raster, or bitmap, images. The most common form of image storage on a computer is the raster: a simple array of tiny dots called pixels that together form a bitmap. Each pixel can take one of two or more colors. Color depth is determined by the number of bits used to encode the color: one bit gives two colors, four bits give sixteen colors, eight bits give 256 colors, 16 bits give 65,536 colors, and so on. Depending on the hardware, each dot can display over two million colors. A greater color depth means the picture will look closer to what the eye sees in the original; proportions, size, color, and texture should be as accurate as possible.

b. Vector images. Such images are built by drawing elements or objects such as lines, rectangles, and circles. The advantage of a vector image is the relatively small amount of data required to represent it, so it does not need much storage space. The image consists of a set of drawing commands that are executed when needed. A bitmap requires a certain number of pixels to produce the appropriate height, width, and color depth, while a vector image is based on a relatively small number of drawing commands. The drawback of vector images is the limited level of detail that can be represented in a picture. Compression is used to reduce the size of image files, which is useful for storing large numbers of images and for increasing transfer rates; common compressed image formats are GIF, TIFF, and JPEG (Hillman, D. Multimedia: Technology and Applications. New Delhi: Galgotia, 1998).
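The color-depth progression and the raster-versus-vector size difference described above can be checked with a short Python sketch (the function names are my own, chosen for illustration):

```python
def colors_for_depth(bits):
    """Number of distinct colors a pixel can take at a given color depth."""
    return 2 ** bits

def bitmap_bytes(width, height, bits_per_pixel):
    """Uncompressed size of a raster image in bytes."""
    return width * height * bits_per_pixel // 8

# The progression from the text: 1 bit -> 2, 4 -> 16, 8 -> 256, 16 -> 65,536.
for bits in (1, 4, 8, 16, 24):
    print(f"{bits:2d} bits -> {colors_for_depth(bits):,} colors")

# A 640x480 true-color bitmap stores every pixel explicitly,
# while a vector circle is just one command such as
# ("circle", cx, cy, radius, color) -- a handful of numbers at any size.
print(bitmap_bytes(640, 480, 24), "bytes for a 640x480 24-bit bitmap")
```

The contrast in the last lines is the whole storage argument: the bitmap's size grows with its area, whereas the vector description stays constant no matter how large the shape is drawn.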

4. Video. Video is defined as the display of recorded real-life events on a television screen or computer monitor. Embedding video in multimedia applications is a powerful means of conveying information: it can include elements of personality that other media lack, such as the on-screen presence of a presenter. Video can be classified into two types: analog and digital.

a. Analog video. This type of video is stored on non-computer media such as videotapes, laser discs, and film. It is divided into two types: composite and component analog video.

i. Composite analog video combines all video components, including luminance, color, and timing, into a single signal. Combining the components results in a loss of color accuracy and clarity, and the format suffers generation loss: quality degrades each time the recording is copied for editing or other purposes. This format was used to record video on magnetic tapes such as Betamax and VHS.

ii. Component analog video is considered more advanced than composite video. It separates the components of the video signal, such as color, brightness, and timing, into individual signals. S-VHS and Hi8 are examples of this type of analog video, in which the color and brightness information are stored on separate tracks. In the early 1980s, Sony released a new portable professional video format in which the signals are stored on three separate tracks.

b. Digital video is the most engaging multimedia medium and a powerful tool for bringing computer users closer to the real world. Digital video requires a large amount of storage space: if a high-quality still color image on a computer screen takes one megabyte or more of memory, and the image must be replaced at least thirty times per second to create the illusion of motion, then one second of video requires about thirty megabytes of storage. The more often the picture is refreshed, the smoother the video. Video also requires high bandwidth to transfer over a network, so digital video compression schemes exist; video compression standards include MPEG, JPEG, Cinepak, and Sorenson. In addition, streaming technologies such as Adobe Flash, Microsoft Windows Media, QuickTime, and RealPlayer provide acceptable video playback at low Internet bandwidth; QuickTime and RealVideo are the most commonly used for wide distribution. Digital video formats can be divided into two categories: composite and component video.

i. Composite digital formats encode the video information in binary (0s and 1s) but retain some of the weaknesses of analog composite video, such as limited color and image resolution and loss of quality when making copies.

ii. Component digital formats are uncompressed and give very high image quality, which makes them very expensive.

iii. Video can be used in many areas. A video recording can improve understanding of a subject by accompanying the explanation. For example, if we want to show dance steps used in different cultures, video will convey this more easily and effectively (Vaughan, T. Multimedia: Making It Work, 7th ed. New Delhi: McGraw-Hill, 2008, pp. 165-170).
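The back-of-the-envelope storage figures in the digital video paragraph above can be reproduced in Python. The frame size (640x480, 24-bit) is my own example choice; the point is that an uncompressed frame of about a megabyte, shown thirty times per second, lands near the "thirty megabytes per second" figure the text cites:

```python
def video_bytes_per_second(width, height, bits_per_pixel, fps):
    """Storage required for one second of uncompressed digital video."""
    bytes_per_frame = width * height * bits_per_pixel // 8
    return bytes_per_frame * fps

# A 640x480 24-bit frame is about 0.9 MB; at 30 frames per second one
# second of uncompressed video needs roughly 26 MiB -- which is why
# compression schemes such as MPEG are indispensable for delivery.
per_second = video_bytes_per_second(640, 480, 24, 30)
print(per_second / 2**20, "MiB per second")
```

Scaling the same arithmetic to a two-hour film makes the case for streaming and compression even starker: uncompressed, it would run to roughly 180 GB.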

Today, multimedia is developing very rapidly in the field of information technology. The ability of computers to process various types of media makes them suitable for a wide range of applications, and, most importantly, more and more people have the opportunity not only to view multimedia projects but also to create them themselves.

The concept of "multimedia" (a rather controversial term in itself) is often described as the presentation of information as a combination of text, graphics, video, animation, and sound. Looking at this list, we can note that the first four components (text, graphics, video, and animation) are all variants of displaying information by graphical means and belong to a single medium (rather than to "many media", as "multimedia" implies), namely the medium of visual perception.

So, by and large, we can speak of multimedia only when an audio component is added to the means of influencing the organs of vision. Of course, computer systems now exist that can also affect human tactile perception and even produce smells associated with certain visual objects, but so far such applications are either highly specialized or in their infancy. Therefore, it can be argued that today's multimedia technologies are technologies aimed at transmitting information mainly through two channels of perception: sight and hearing.

Since descriptions of multimedia technologies in print pay unfairly little attention to the audio component compared with technologies for transmitting graphic objects, we decided to fill this gap and asked one of the leading Russian specialists in the field of digital sound recording, Sergei Titov, to talk about how audio for multimedia content is created.

ComputerPress: So, we can say that the concept of "multimedia" does not exist without the sound component. Sergei, could you tell us how this particular part of multimedia content is created?

Sergei Titov: In principle, we perceive about 80% of all information about the outside world through sight and less than 20% through hearing. However, that 20% is impossible to do without. There are many multimedia applications where sound comes first and sets the tone for the whole piece. For example, most often a video clip is made for a specific song rather than a song being written for a video. That is why, in the expression "audiovisual series", the word "audio" comes first.

If we talk about the audio component of multimedia, there are two aspects: the consumer's point of view and the creator's. Presumably it is the creation of multimedia content that interests a computer magazine, since it is created precisely with the help of computer technology.

Speaking about the means for creating audio content, it should be noted that the production process requires fundamentally higher resolution for the recorded files than the consumption stage does, and, accordingly, higher-quality equipment.

Here you can draw an analogy with graphics: a designer can later deliver a picture in low resolution, for example for publication on the Internet, discarding some of the information, but the development and editing process inevitably works with all the available information, decomposed into layers. The same happens when working with sound. Therefore, even for an amateur studio, we should be talking about at least semi-professional equipment.

Speaking about the resolution of the system, we actually mean two parameters: the accuracy with which the signal amplitude is measured, and the sampling rate. In other words, we could measure the amplitude of the signal very accurately but do it very rarely and, as a result, lose most of the information.

KP: How does the process of creating a soundtrack take place?

S.T.: Any sound picture is created from constituent elements. Just as a DJ at a disco operates with a certain set of initial components from which he builds a continuous program, so a person doing sound work has initial materials that he edits and assembles into a finished picture. If we are talking about music in its pure form, the first task is to record these elements and then combine them into a single picture. This, in general, is called mixing.

If we are talking about scoring a video sequence (and here we can really speak of multimedia content), then you need to collect the elements that make up the soundtrack, "tie" them to the picture, edit them, and bring them into mutual correspondence; at the same time, the individual elements in question must be arranged in a way that is convenient for work.

Computer programs provide an interface that contains the familiar tracks and a mixer with a timeline. Each track holds its own element, which undergoes one or another modification. Thus we create a synthesized sound field by operating with the existing elements, and since this task is, in principle, a creative one, we must be able to modify these elements using various types of processing: from simple editing (cutting, rearranging, splicing) to complex processing, where individual elements can be lengthened or shortened and the character of each signal's sound can be changed.
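At its core, the mixing step described above is a weighted, sample-by-sample sum of the tracks. A minimal Python sketch (the function and variable names are my own, and real mixers work on streams, not whole lists) makes the arithmetic concrete:

```python
def mix(tracks, gains):
    """Mix equal-length mono tracks (lists of floats in [-1.0, 1.0]):
    apply a gain to each track, sum sample by sample, clip to range."""
    mixed = []
    for samples in zip(*tracks):
        s = sum(g * x for g, x in zip(gains, samples))
        mixed.append(max(-1.0, min(1.0, s)))
    return mixed

voice = [0.5, 0.5, 0.5, 0.5]    # e.g. a narration element on one track
noise = [0.2, -0.2, 0.2, -0.2]  # e.g. a background element on another
result = mix([voice, noise], gains=[1.0, 0.5])  # narration kept louder
```

Changing a gain is the "volume fader" of the interview's mixer metaphor; the more elaborate processing mentioned (time-stretching, timbre changes) replaces the simple multiplication here with filters applied per track before the sum.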

KP: What software is needed to do this job, and what special computer hardware is needed?

S.T.: The specialized computer hardware is essentially just an I/O board, although certain requirements, of course, also apply to the rest of the workstation. Software for organizing sound recording and editing exists in huge quantities, from cheap amateur programs to semi-professional and highly professional systems. Most of these programs have a plug-in architecture and require high computer performance and sufficiently powerful disk subsystems. The fact is that producing, rather than merely reproducing, multimedia content requires machines with large amounts of RAM and a powerful processor. The most significant parameter here is not so much raw processor power as a good balance of the machine in terms of its disk subsystems. The latter, as a rule, are SCSI devices, which are preferable when you need to handle data streams that must not be interrupted; IDE interfaces are practically not used, since an IDE drive can have a very high burst transfer rate but a low sustained transfer rate.

The IDE interface allows the disk to send data by accumulating it in a buffer and then pumping it out of the buffer. SCSI works differently: even if the burst transfer rate is lower, the sustained streaming rate remains high.

It should also be noted that these tasks require very large amounts of disk space. A simple example: a 24-bit mono file, even at a comparatively low sampling rate such as 44.1 kHz, takes 7.5 MB per track per minute.
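That 7.5 MB figure is easy to verify: one minute of linear PCM is simply sample rate times bytes per sample times channels times 60 seconds. A quick Python check (the function name is mine):

```python
def pcm_mib_per_minute(sample_rate=44100, bit_depth=24, channels=1):
    """Storage for one minute of uncompressed linear PCM, in MiB."""
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return bytes_per_second * 60 / 2**20

print(pcm_mib_per_minute())  # 24-bit mono at 44.1 kHz: about 7.6 MiB/minute
print(pcm_mib_per_minute(bit_depth=16, channels=2))  # CD stereo: about 10.1 MiB
```

Multiply by 16-24 simultaneous tracks and the interview's insistence on fast, uninterrupted disk subsystems follows directly.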

KP: Isn't there some kind of technology to store this data more compactly?

S.T.: This is linear PCM (Pulse Code Modulation), which cannot be compressed. It can later be converted to MP3, for example, but at the distribution stage, not the production stage. At the production stage we have to work with linear, uncompressed signals. Let me again give an analogy with Photoshop. In order to build a graphic composition, a designer must have a complete understanding of what is stored in each layer, have access to each layer, and adjust it separately. As a result, the Photoshop PSD format takes up a lot of space but allows you to go back at any time and correct each layer without affecting the others. Once the picture is completely assembled, it can be exported in a different format, compressed with or without loss, but, I repeat, only when the production stage is completely finished. The same happens with sound: you can mix a sound composition only if you have complete information about all the components of the signal.

As I said before, to create a sound image you need a source library appropriate to the task you are working on. A video producer mostly needs pre-recorded noises and effects, while a DJ needs so-called loops (the repetitive elements characteristic of dance music). All this material should be stored as files understandable to the program that works with them. Further, an acoustic monitoring system is needed to control all this, and the program, accordingly, should make it possible to manipulate the source material, which is, in fact, the creative part of the process. Using the computer system as a means of input-output and the program as a tool, the user edits the source material according to his own intuition: raising or lowering the volume of individual elements, changing the timbral color. As a result of the mixing process, the sound engineer must obtain a balanced sound image that has a certain aesthetic value. As you can see, the analogy with graphics is noticeable even at the terminological level. And whether this picture is worth anything depends entirely on the experience, taste, and talent of the sound engineer (given, of course, high-quality equipment).

KP: Until now we have had in mind a purely sound picture; however, speaking of multimedia, we need to consider what means exist to bring sound and image together. What is needed for this?

S.T.: Of course, you need a video input-output card, for example one with MPEG or QuickTime output (if we are talking about multimedia, QuickTime will be more convenient).

KP: I think it would be interesting to consider a number of practical tasks in scoring a video sequence and, using specific examples, find out what equipment and software are required for tasks of various levels of complexity. We could start by analyzing the options for creating an inexpensive presentation film...

For example, let's consider this case: there is a video shot with an amateur camera, and the lines and dialogue have already been recorded through the camera's microphone. Now we need to turn this into an attractive presentation film with semi-professional dubbing. What is needed for this?

S.T.: If the task is to achieve a certain perception of the sound material (even in an amateur film), a lot needs to be added to the source material: sound effects, background music, so-called background noise, and so on. Therefore, in any case, it becomes necessary to have several tracks sounding simultaneously, that is, to read several files at the same time. We should also be able to adjust the timbre of these files during production and edit them (lengthen, shorten, etc.).

It is important to note that the system must make it possible to experiment so that the user can check whether a given effect sounds appropriate in a given place. The system should also allow you to combine sound effects precisely with the sound context, adjust the panorama (when it comes to stereo sound), and so on...

KP: Well, the problem is clear, and the requirements for the equipment are clear... Now I would like to get an idea of what specific equipment and software can be recommended for solving such a problem and how much it will cost the user.

S.T.: In principle, we need some kind of video editor, but that, as I understand it, is a separate topic; today we should concentrate on the audio component. In any case, in the task you described, the audio is subordinate to the video. Therefore, we will assume that the video sequence already exists and will not analyze how it was edited. We start from the situation where there is a clean video and a rough audio track. In this rough audio track you need to delete some lines, replace others with new ones, and so on. Whether we are talking about a presentation film or an amateur fiction film, we will need to insert some artificial audio effects, because the sound of many events in the frame, recorded with the video camera's microphone, will sound, as they say, unconvincing.

KP: And where else to get these sounds, if not from real-life events?

S.T.: There is a whole field called sound design, which consists in creating sounds that, when reproduced, give a convincing sound picture, taking into account the peculiarities of how the viewer perceives sound. In addition, there is the dramatic emphasis of certain sounds in the picture that actually sound different in reality. Of course, if we are talking about amateur cinema and semi-professional dubbing, some of the possibilities are reduced, but the tasks we face are the same as those of professionals.

In any case, in addition to editing the draft, it is necessary to add some special effects.

KP: So what kind of equipment do we need to solve this problem?

S.T.: I emphasize once again that we are talking about the semi-professional level, that is, producing an amateur film at home or producing programs for cable TV studios, which are, in general, similar tasks. To solve most such post-production tasks, you need a Pentium III 500 MHz machine, preferably with 256 MB of RAM and a SCSI disk subsystem; the video subsystem does not play a special role, but it is desirable to have hardware decoders for compressed video; accordingly, an I/O board is needed, and for the simplest amateur work it can be a SoundBlaster. As a relatively cheap complex, you can consider the Nuendo software product, which will work with almost any board, together with, for example, a cheap $150 SoundBlaster. Of course, it must be said right away that such a system will have very limited capabilities because of the poor quality of the SoundBlaster board, which has very low-quality microphone amplifiers and a very poor ADC/DAC.

KP: I would like to hear what Nuendo allows you to do.

S.T.: Nuendo is a software package with a plug-in architecture designed for audio-production work; moreover, it is focused specifically on the task of creating "audio for video", that is, one can say it is designed precisely for multimedia work. The program works with sound and image at the same time, the image being a secondary component for it. Nuendo runs on Windows NT, Windows 98, and BeOS. The program costs $887.

The program provides the ability to view the video laid out along a timeline, plus a multitrack system for editing and mixing the sound picture.

A feature of the package is its flexibility: it can run on a wide range of inexpensive hardware. It is widely believed that serious systems work only on equipment with specialized DSP coprocessors. Nuendo proves the opposite: it provides tools for professional audio production yet does not require specialized hardware or special coprocessors.

Nuendo provides 200 mixing tracks and supports surround sound so well that many other systems look quite pale in comparison.

Nuendo performs quality processing in real time on the workstation's own processor. Of course, the processing speed depends on the chosen workstation, but the advantage of the program is that it adapts to different processor capacities. Until a few years ago, serious audio processing was unthinkable without DSPs, but today desktop computers have processors powerful enough to handle real-time processing tasks. Obviously, the ability to use an ordinary computer without DSP coprocessors adds flexibility to the system.

Nuendo is an object-oriented system (that is, a system that operates with metaphorical objects: a mixing console, an indicator, a track, etc.), which allows you to edit audio files easily and fully in projects of varying complexity, providing a very convenient and well-thought-out interface. Drag-and-drop tools are available for a variety of tasks and are used especially heavily when handling crossfades.

An important feature of the program is its almost unlimited Undo & Redo system. Nuendo provides more than simple Undo & Redo operations: each audio segment has its own editing history, and the system is organized so that even after several hundred changes the file size required to store a segment never more than doubles compared to the original.
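The idea of per-segment editing history can be sketched with a toy data structure. This is emphatically not Nuendo's actual implementation, just an illustration of why per-segment undo stacks keep edits to one segment from disturbing any other:

```python
class Segment:
    """Toy per-segment edit history with unbounded undo/redo."""

    def __init__(self, data):
        self.data = data
        self._undo = []  # previous states, most recent last
        self._redo = []  # undone states available for redo

    def edit(self, new_data):
        self._undo.append(self.data)
        self._redo.clear()  # a fresh edit discards the redo branch
        self.data = new_data

    def undo(self):
        if self._undo:
            self._redo.append(self.data)
            self.data = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.data)
            self.data = self._redo.pop()
```

A real editor would stack compact descriptions of each operation (deltas) rather than whole copies of the audio, which is presumably how the stored file can stay within roughly twice its original size even after hundreds of edits.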

One of the strongest aspects of the program is its surround sound support: the system not only has an excellent tool for editing the position of a sound source but also supports multi-channel surround effects.

KP: What are the actions of the user of this program in the process of dubbing?

S.T.: We listen to the soundtrack we already have and see what information needs to be deleted and what needs to be edited.

KP: If we're talking about an amateur film, how many tracks might we need?

S.T.: In my experience, about 16-24 tracks.

KP: What can be placed on such a huge number of tracks?

S.T.: Count for yourself: one track is occupied by the rough recording, a second by special effects, a third by off-screen music, and besides music there are also dialogue, commentary, and so on. When all this is put together, you get exactly that many tracks.

Plus, 16 or even 24 tracks is a relatively small number; in professional films the count can exceed a hundred.

KP: What other options could you recommend for semi-professional use, say, for the same dubbing of a presentation film at home?

S.T.: An affordable option I would suggest considering is the combination of the DIGI-001 board and Pro Tools 5 LE software. This option is significantly better in terms of I/O board quality and somewhat poorer in software.

Currently there is a version for Mac OS, and a version for Windows NT is about to be released (I hope that by the time this issue is published, the Windows version will also appear in Russia). The hardware for Windows and Mac OS is exactly the same.

KP: Can we say that after the release of the version for Windows it will be a cheaper solution due to the fact that the workstation itself will be cheaper?

S.T.: It is a common misconception that a PC-based sound station is cheaper than a Macintosh-based solution. But the notion that there are cheap PC stations and expensive Macintosh stations is also wrong. There are specific systems for specific tasks, and the fact is that building a PC-based system for creating multimedia content is sometimes very difficult, since it is hard to assemble a machine from a random set of cheap IBM-compatible parts that would deliver optimal performance...

Regardless of the type of workstation in the system, the DIGI 001 provides much wider possibilities than a SoundBlaster, and the board costs only $995 together with the Pro Tools 5.0 LE software, that is, in total about the same as the previous solution with the cheapest SoundBlaster.

At the same time, if Nuendo plus SoundBlaster is an option whose possibilities are limited by a cheap board while the software is very capable, then DIGI 001 plus Pro Tools 5.0 LE pairs a much more powerful board with software somewhat more modest than Nuendo. To make clear what is at stake, let's list the advantages of this solution from the point of view of the I/O board: the DIGI 001 has 24-bit ADCs and DACs, can play back 24 tracks simultaneously, has eight inputs instead of two, and so on. So if, for example, during the recording of a presentation you need to record a scene in which six people speak into six microphones, the DIGI 001 will cope with the task quite well. Add to that an independent monitor output plus 24-bit files, whereas with Nuendo and a cheap SoundBlaster you can only work with 16-bit files...

Pro Tools 5 LE allows you to do almost the same things as Nuendo: non-linear editing and the same manipulations with audio files, plus there is a mini-sequencer that also allows you to record music using MIDI instruments.

KP: So how do professional tasks differ from semi-professional ones and what equipment is required for them?

S.T.: First of all, I could talk about the Pro Tools system. To forestall possible questions, I want to emphasize again that it is necessary to distinguish Digidesign Pro Tools as a trademark from Pro Tools as specific equipment. The Pro Tools brand covers a whole range of products. The simplest system in this range is the DIGI 001, which we discussed when describing semi-professional tasks; the line ends with systems built on dozens of workstations tied into a single network.

KP: Let's choose an option that can be used for dubbing simple professional films, TV series, and so on.

S.T.: The next system we could consider is Pro Tools 24. To give an idea of the tasks this system solves, note that the latest episodes of "Xena" were dubbed using this equipment.

Versions are available for both Mac OS and Windows NT. As for the requirements for NT stations, it must be a serious machine, for example an IBM IntelliStation M Pro with 512 MB of RAM. The documentation states a Pentium II 233 as the minimum processor, but in reality you need at least a Pentium II 450 and, of course, a SCSI disk system, as well as a dual-port accelerator to pull 64 tracks at the same time.

Pro Tools 24 is a set of specialized signal-processing boards built on Motorola DSP chips. It is important to note that this system is based on coprocessors: the computer's own processor handles input-output and on-screen graphics, while all signal processing is performed by specialized DSP (Digital Signal Processing) coprocessors. This makes it possible to solve quite complex mixing problems, and it is this technology that is used to score so-called blockbusters. For example, to create the sound of Titanic (the effects alone!), a system of 18 networked workstations was used.

Soundtracks in films like Titanic are stunningly complex, time-varying soundscapes. If you analyze a sound-rich five-to-ten-minute excerpt from such a film and write down all the sounds used in it, you get a list of hundreds of items. Of course, not all of these sounds are audible from a VHS-level cassette, and many viewers do not even suspect how complex the film's sound image is. (Moreover, most of these sounds are created synthetically and do not exist in nature.)

KP: You raised the issue of replacing natural sounds with more convincing ones. Where can these sound libraries be purchased and how much do they cost?

S.T.: Such libraries cost from fifty dollars up to several thousand. Moreover, these sounds are mainly used for simple productions at the level of cable networks. For professional films, even low-budget ones (not to mention expensive ones), all sounds are recorded independently.

KP: Why are the sounds from the standard library not suitable for a professional film?

S.T.: In principle, I am describing how this is done in the West, or how it should be done, because here, out of poverty, people very often economize on things one must not economize on. The fact is that a feature film reflects an individual director's intention, and it is often almost impossible to find a library sound that fully corresponds to that intention.

KP: But the sound can be edited, and the possibilities for this, as you say, are very wide?

S.T.: There is such a thing as sound timbre. You can emphasize or weaken some components of a timbre, but you cannot change it radically. That is why all the noises for a professional film are recorded from scratch, and professionals do it. Let me give an example: the famous film "Batman Returns" featured the sound of Batman's car. Tell me, in which library could you find that sound? Moreover, when it comes to stereo sound and Surround technology, each sound picture is simply unique. If, say, a helicopter flies toward the viewer and then away, such a sound image is obviously tied to the plot. In such cases it is not necessary to record real sounds; most often they are created synthetically.

KP: Why is it impossible to record sounds from real physical processes and present them exactly as they occur in life? Why should you use some other synthetic ones instead?

S.T.: We do not need to recreate accurately the sound of real physical, as you put it, processes. If a bomb explodes three meters from the foreground, the viewer must be given a sound that is not at all what a soldier who happened to be near the explosion would actually hear! We must convey a certain conventional picture that allows the viewer to imagine reality, focusing on the peculiarities of his perception, on the artistic accents we need, and so on.

Media components

What is multimedia? Multi means many; media means medium. It is a human-machine interface that uses the communication channels natural to a person (text, graphics, animation and video, audio), as well as more specialized virtual channels addressing the other senses. Let's take a closer look at the main components of multimedia.

1. Text. This is sign-based or verbal information. Text symbols can be letters, mathematical, logical, and other signs. Text need not be literary: a computer program or musical notation is also text. In any case, it is a sequence of characters written in some language.

The words of the text have no apparent resemblance to what they mean. That is, they are addressed to abstract thinking, and in our head we recode them into certain objects and phenomena.

Moreover, text possesses accuracy and concreteness; it is reliable as a means of communication. Without text, information ceases to be specific and unambiguous. Thus, text is abstract in form but concrete in content.

A scientific article, an advertisement, a newspaper or magazine, a Web page, a computer program's interface, and much more are based on textual information. Remove the text from any of these information products and you effectively destroy the product. Even in advertising, not to mention brochures, periodicals, and books, the text is the main thing: the chief goal of the overwhelming majority of printed materials is to convey information to a person in textual form.

Text can be more than just visual. Speech is also text: concepts encoded as sounds. And this text is much older than the written kind; man learned to speak before he learned to write.

2. Visual or graphic information. This is all the rest of the information coming through sight, static and not encoded into the text. As a means of communication, the image is more ambiguous and indefinite, it does not have the concreteness of the text. But it has other advantages.

a) A wealth of information. In active viewing, the addressee perceives a variety of meanings and nuances simultaneously. In a photograph, for example, much can be read from the expressions on people's faces, their poses, the surrounding background, and so on. And each viewer may perceive the same image differently.

b) Ease of perception. It takes much less effort to view the illustration than to read the text. The desired emotional effect can be achieved much more easily.

Graphics can be divided into two types: photography and drawing. Photographically accurate representation of the real world gives the material authenticity and realism, and this is its value. Drawing is already a refraction of reality in the human mind in the form of symbols: curves, shapes, their colors, compositions and others. A picture can have two functions:

a) visual clarification and addition of information: in the form of a drawing, diagram or in the form of an illustration in a book - the goal is the same;

b) creation of a certain style, aesthetic appearance of the publication.

3. Animation or video, that is, movement. Computer animation is most often used to solve two problems.

a) Attracting attention. Any moving object immediately grabs the viewer's attention. This is an instinctive property, because a moving object can be dangerous. Therefore, animation is important as a factor in drawing attention to the most important thing.

At the same time, simple means of attracting attention are enough. Thus banners on the Internet usually use elementary, cyclically repeated movements; complex animation is even contraindicated, since websites are often overloaded with graphics as it is, and this annoys and tires the visitor.

b) Creation of various information materials: videos, presentations, etc. Monotony is not suitable here. It is necessary to control the viewer's attention. And this requires such things as a script, plot, drama, even if in a simplified form. The development of action in time has its own stages and its own laws (which will be discussed below).

4. Sound. Sound information is addressed to another sense organ: not sight but hearing. Naturally, it has its own specifics and its own design and technical features, although its perception shows many parallels: speech is the analogue of writing, music can to some extent be compared with fine art, and natural, unprocessed sounds are used as well.

The essential difference is that static sound does not exist. Sound is always dynamic vibrations of the environment with a certain frequency, amplitude, timbre characteristics.

The human ear is highly sensitive to the harmonic spectrum of sound vibrations, to the dissonance of overtones. Therefore, obtaining high-quality digitized computer sound is still a technically challenging task. And many experts consider analog sound to be more "alive", natural in comparison with digital sound.
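Since a sound is fully described by how its amplitude varies in time, even the simplest digital representation is just such a vibration sampled at discrete instants. A minimal sketch of digitizing a pure tone (the sample rate and the choice of note are illustrative assumptions):

```python
import math

def sine_samples(freq_hz, amplitude, duration_s, sample_rate=44100):
    """Sample the vibration amplitude * sin(2*pi*f*t) at discrete instants."""
    count = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(count)]

# One second of concert pitch A (440 Hz) at half amplitude:
tone = sine_samples(440.0, 0.5, 1.0)
print(len(tone))  # 44100 samples
```

Frequency sets the pitch, amplitude the loudness; timbre would come from adding further harmonics on top of this single sine component.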

5. Virtual channels that appeal to other senses.

So, the vibration alert in a mobile phone appeals not to sight or hearing but to touch. And this is not exotic but a widespread information channel: it reports that someone wants to talk to the subscriber. Tactile sensations are used for other purposes too: there are various simulators, special gloves for computer games and for surgeons, and so on.

In the 4D cinemas that have recently appeared, the effect of the viewer's presence in the film is achieved by various means that have not been used before: movable chairs, splashes in the face, gusts of wind, smells.

There are even communication and control channels in which nerve cells and the human brain are directly involved. They are designed primarily for people with disabilities. After training, a person is able to control the movement of points on a screen by the power of thought, and also (more importantly) to give mental commands that set a special wheelchair in motion.

In this way, virtual reality from fiction is gradually turning into a part of everyday life.


Ministry of Education of the Russian Federation

University of Control Systems and Radioelectronics

Multimedia

and its components

Abstract on Programming

Prepared by:

Checked by:

    • 1. What is multimedia?
    • 2. What is CD-ROM?
      • 2.1. A bit of history.
      • 2.2. CD-ROM drive parameters.
      • 2.3. Data transfer rate.
      • 2.4. Access time.
      • 2.5. Cache memory.
    • 3. Video cards.
      • 3.1. The monochrome MDA adapter.
      • 3.2. The CGA color graphics adapter.
      • 3.3. The EGA enhanced graphics adapter.
      • 3.4. VGA adapters.
      • 3.5. The XGA and XGA-2 standards.
      • 3.6. SVGA adapters.
    • 4. Sound.
      • 4.1. 8- and 16-bit sound cards.
      • 4.2. Speakers.
    • 5. Prospects.
    • Tables.
    • Literature.

1. What is multimedia?

The concept of multimedia encompasses a variety of computer technologies related to audio, video and storage methods. In the most general terms, it is the ability to combine image, sound and data. Basically, multimedia means adding a sound card and a CD-ROM drive to your computer.

The Multimedia PC Marketing Council was created by Microsoft to adopt standards for multimedia computers. This organization created several MPC standards, emblems and trademarks that were allowed to be used by manufacturers whose products comply with the requirements of these standards. This allowed the creation of joint hardware and software products in the field of multimedia for IBM-compatible systems.

The MPC Marketing Council recently handed its mandate over to the Software Publishers Association's Multimedia PC Working Group, which has many member organizations and now sets all MPC specifications; it is this group that adopted the new MPC standards.

The Council developed the first two multimedia standards, called MPC Level 1 and MPC Level 2. In June 1995, after the creation of the Software Publishers Association (SPA), these were supplemented by a third, MPC Level 3. This standard defines the minimum requirements for a multimedia computer (see Table 1).

Next, let's take a closer look at the individual components (image, sound and data) of multimedia.

2. What is CD-ROM?

A CD-ROM is a read-only optical storage medium that can hold up to 650 MB of data, which equates to approximately 333,000 pages of text or 74 minutes of high-quality audio, or a combination of the two. A CD-ROM is very similar to an ordinary audio CD, and you can even try to play it on a regular audio player; in that case, however, you will hear only noise. Data stored on a CD-ROM can be accessed faster than data on floppy disks, but still significantly slower than on modern hard drives. The term CD-ROM refers both to the discs themselves and to the devices (drives) that read information from them.
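The figures above follow directly from the disc format: data is read at 75 sectors per second, and each Mode 1 sector carries 2,048 bytes of user data. A quick sanity check of the arithmetic (the assumption of roughly one sector's worth, about 2 KB, of text per page is ours):

```python
# Capacity of a 74-minute CD-ROM, derived from the sector format:
# 75 sectors per second, 2,048 user-data bytes per sector (Mode 1).
SECTORS_PER_SECOND = 75
USER_BYTES_PER_SECTOR = 2048

seconds = 74 * 60
total_sectors = seconds * SECTORS_PER_SECOND           # 333,000 sectors
capacity_bytes = total_sectors * USER_BYTES_PER_SECTOR
capacity_mb = capacity_bytes / (1024 * 1024)
print(f"{capacity_mb:.0f} MB")                         # about 650 MB

# At ~2 KB of text per page, the disc holds about one page per sector.
pages = capacity_bytes // USER_BYTES_PER_SECTOR
print(f"about {pages} pages")
```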

The scope of application of CD-ROMs is expanding very quickly: if in 1988 only a few dozen of them were recorded, today several thousand titles of a wide variety of thematic discs have already been released - from statistical data on world agricultural production to educational games for preschoolers. Many small and large private firms and government organizations produce their own CDs with information of interest to specialists in certain fields.

2.1. A bit of history.

In 1978, Sony and Philips joined forces to develop the modern audio CD. By then Philips had already developed laser-disc playback technology, and Sony had years of research in digital recording behind it.

Sony insisted that the diameter of the disc be 12 cm, while Philips proposed to reduce it.

In 1982, the two firms published a standard defining signal-processing methods, recording methods, and the size of the disc, 4.72 inches, which is still in use today. The exact dimensions of a compact disc are: outer diameter 120 mm, central hole diameter 15 mm, thickness 1.2 mm. It is said that this size was chosen because such a disc could hold the entirety of Beethoven's Ninth Symphony. The collaboration of the two firms in the 1980s led to additional standards covering the recording of computer data, on the basis of which modern compact disc drives were created. And if at the first stage engineers worked out how to fit the greatest of symphonies onto the disc, programmers and publishers now wonder how to squeeze ever more information into this little circle.

2.2. CD-ROM drive parameters.

The parameters given in the documentation for CD-ROM drives mainly characterize their performance.

The main characteristics of CD-ROM drives are transfer speed and data access time, the presence of internal buffers and their capacity, and the type of interface used.

2.3. Data transfer rate.

The transfer rate determines the amount of data that the drive can read from the CD and pass to the computer in one second. The basic unit of measure for this parameter is kilobytes of data transmitted per second (KB/s). Obviously, this characteristic reflects the maximum read speed of the drive. The higher the read speed the better, but remember that there are other important parameters.

According to the standard recording format, 75 data blocks of 2,048 useful bytes each must be read every second, so the data transfer rate equals 150 KB/s. This is the standard transfer rate for CD-DA devices, known as single speed. The term means that such CDs are written in constant linear velocity (CLV) format: the rotation speed of the disc is varied so that the linear speed remains constant. Since, unlike music CDs, data from a CD-ROM can be read at an arbitrary speed (as long as that speed is constant), it is quite possible to increase it. Today, drives are produced in which information is read at various multiples of the single-speed rate (see Table 2).
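The single-speed figure, and the multiples derived from it, are simple to compute; a small sketch (the helper name is ours):

```python
# CLV transfer rates as multiples of the single-speed (1x) rate.
# 1x = 75 sectors/s * 2,048 bytes = 153,600 bytes/s = 150 KB/s.
BASE_KBPS = 75 * 2048 // 1024  # 150 KB/s

def transfer_rate_kbps(multiplier):
    """Nominal data rate of an Nx drive, in KB/s."""
    return BASE_KBPS * multiplier

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n}x: {transfer_rate_kbps(n)} KB/s")
```

So, for instance, a 4x drive delivers 600 KB/s and a 32x drive 4,800 KB/s at its rated speed.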

2.4. Access time.

Data access time for CD-ROM drives is determined in the same way as for hard drives: it is the delay between receipt of a command and the moment the first bit of data is read. Access time is measured in milliseconds, and the standard rated value for 24x drives is approximately 95 ms. This is an average access time, since the real access time depends on the location of the data on the disc: when working on the inner tracks the access time is shorter than when reading from the outer tracks. Therefore, drive specifications quote the average access time, defined as the mean over several random reads from the disc.

The shorter the access time the better, especially when data needs to be found and read quickly. The access time of CD-ROM drives is constantly decreasing, but it remains much worse than that of hard drives (100-200 ms for CD-ROM versus about 8 ms for hard drives). Such a significant difference is explained by fundamental design differences: hard drives use several heads, and the range of their mechanical movement is smaller, whereas a CD-ROM drive uses a single laser head that travels across the entire disc. In addition, data on a CD is written along a spiral, so after the head moves to the required track it must still wait for the laser beam to reach the area with the necessary data.

The data shown in Table 3 are typical of high-end devices. Within each category of drives (with the same data transfer rate), there may be devices with a higher or lower access time.

2.5. Cache memory.

Many CD-ROM drives have built-in buffers, or caches: memory chips installed on the drive's board that store read data, making it possible to transfer large blocks of data to the computer in a single operation. Typically the buffer capacity is 256 KB, though models with larger and smaller buffers are available (the more the better!). Faster devices usually have larger buffers, to sustain their higher data rates. The recommended buffer capacity is at least 512 KB, the standard value for most 24x devices.

3. Video cards.

The video card generates monitor control signals. With the advent of the PS / 2 family in 1987, IBM introduced new standards for video systems that almost immediately replaced the old ones. Most video adapters support at least one of the following standards:

MDA (Monochrome Display Adapter);

CGA (Color Graphics Adapter);

EGA (Enhanced Graphics Adapter);

VGA (Video Graphics Array);

SVGA (Super VGA);

XGA (eXtended Graphics Array).

All programs designed for IBM-compatible computers are written for these standards. For example, within the Super VGA (SVGA) standard, different manufacturers offer different picture formats, but 1024×768 is the standard format for graphics-rich applications.
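The video memory a given mode needs is plain arithmetic: width × height × bits per pixel. A sketch using the modes mentioned in this section (the helper function is our illustration, not part of any standard):

```python
import math

def frame_buffer_bytes(width, height, colors):
    """Bytes of video memory for one screen at the given number of colors."""
    bits_per_pixel = math.ceil(math.log2(colors))
    return width * height * bits_per_pixel // 8

# 640x480 with 65,536 colors (16 bits per pixel): 600 KB
print(frame_buffer_bytes(640, 480, 65536) // 1024, "KB")
# 1024x768 with 65,536 colors: 1,536 KB, so a 2 MB card is required
print(frame_buffer_bytes(1024, 768, 65536) // 1024, "KB")
```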

3.1. Monochrome MDA adapter.

The first and simplest video adapter was a monochrome adapter conforming to the MDA specification. Its board carried, in addition to the display controller itself, a printer controller. The MDA video adapter displayed only text (characters) at a resolution of 720 pixels horizontally by 350 vertically (720×350). It was a character-driven system and could not display arbitrary graphic images.

3.2. CGA color graphics adapter.

For many years the CGA color graphics adapter was the most common video adapter, although its capabilities are now very far from perfect. This adapter had two main groups of operating modes: alphanumeric, or character (A/N), and graphic with all-points addressing (APA). There are two character modes, 25 lines of 40 characters and 25 lines of 80 characters, both operating with sixteen colors. In both graphic and character modes, 8×8-pixel matrices are used to form the characters. There are also two graphics modes: medium-resolution color (320×200 pixels, 4 colors in one palette out of 16 possible) and high-resolution black and white (640×200 pixels).

One drawback of CGA video adapters is the flickering and "snow" that appear on the screens of some models. Flickering shows up when text moves across the screen (for example, when a line is added): the characters begin to "wink". Snow consists of random flashing dots on the screen.

3.3. The EGA enhanced graphics adapter.

The EGA adapter, discontinued with the advent of the PS/2 computers, consisted of a graphics card, an image-memory expansion card, a set of image-memory modules, and a high-resolution color monitor. One of EGA's advantages was that the system could be built up in modular fashion: because the graphics card worked with any IBM monitor, it could be used with monochrome monitors, with conventional color monitors of earlier models, and with higher-resolution color monitors.

3.4. VGA adapters.

In April 1987, concurrently with the release of the PS/2 family of computers, IBM introduced the VGA (Video Graphics Array) specification, which soon became the generally accepted standard for PC display systems. On the same day, IBM also released the MCGA specification for lower-capability display systems and launched the high-resolution IBM 8514 video adapter. The MCGA and 8514 adapters did not become widely accepted standards like VGA and soon disappeared from the scene.

3.5. XGA and XGA-2 standards.

At the end of October 1990, IBM announced the XGA Display Adapter/A for PS/2 systems, and in September 1992 the XGA-2. Both are high-quality 32-bit adapters with bus-master capability, designed for computers with the MCA bus. Conceived as a new variety of VGA, they provide higher resolution, more colors, and significantly better performance.

3.6. SVGA adapters.

With the advent of the XGA and 8514/A video adapters, IBM's competitors decided not to copy these standards but to produce cheaper adapters with resolutions higher than those of IBM's products. These video adapters formed the Super VGA, or SVGA, category.

SVGA's capabilities are broader than those of VGA cards. Initially SVGA was not a standard: the term covered many different designs from various companies whose parameters exceeded the VGA requirements.

4. Sound.

4.1. 8- and 16-bit sound cards.

The first MPC standard provided for "8-bit" audio. This does not mean that sound cards had to be plugged into an 8-bit expansion slot. The bit depth characterizes the number of bits used to represent each sample digitally. With eight bits, the number of discrete levels of the audio signal is 256; with 16 bits it reaches 65,536 (and the sound quality, of course, improves greatly). An 8-bit representation is sufficient for recording and playing back speech, but music requires 16 bits.
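The relationship between bit depth and level count, and what quantization actually does to a sample, can be sketched as follows (the helper names are ours):

```python
def quantization_levels(bits):
    """Number of discrete amplitude levels at a given bit depth."""
    return 2 ** bits

print(quantization_levels(8))   # 256
print(quantization_levels(16))  # 65536

def quantize(x, bits):
    """Map an analog value in [-1.0, 1.0] to the nearest signed n-bit code."""
    max_code = 2 ** (bits - 1) - 1
    return round(x * max_code)

# The same half-scale signal, at the two bit depths discussed above:
print(quantize(0.5, 8))   # 64
print(quantize(0.5, 16))  # 16384
```

The 16-bit version distinguishes 256 times more amplitude steps, which is exactly why quiet musical detail survives it and not the 8-bit representation.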

4.2. Speakers.

Successful commercial presentations, multimedia, and MIDI work require high-quality stereo speakers, and conventional speakers are too large for the desktop.

Sound cards often do not provide enough power for the speakers: even 4 watts (typical of most sound cards) is not enough to drive high-end speakers. In addition, conventional speakers create magnetic fields; installed near a monitor, they may distort the screen image, and the same fields can corrupt data recorded on floppy disks.

To solve these problems, speakers for computer systems need to be small and efficient. They must be provided with magnetic protection, for example, in the form of ferromagnetic shields in the housing or electrical compensation of magnetic fields.

Today dozens of speaker models are produced, from cheap miniature devices from Sony, Koss, and LabTech to large self-powered units such as those from Bose and Altec Lansing. To assess the quality of a speaker, you need an idea of its parameters.

Frequency response. This parameter represents the frequency range the speaker reproduces. The most logical range would be 20 Hz to 20 kHz, corresponding to the frequencies the human ear perceives, but no speaker reproduces this entire range perfectly, and very few people hear sounds above 18 kHz. The highest-quality speakers reproduce frequencies from 30 Hz to 23 kHz, while the cheapest models limit the sound to the range from 100 Hz to 20 kHz. Frequency response is the most subjective parameter, since two speakers identical by this measure can sound completely different.

Harmonic distortion (THD, Total Harmonic Distortion). This parameter determines the level of distortion and noise that arises when the signal is amplified. In simple terms, distortion is the difference between the audio signal fed to the speaker and the sound actually heard. Distortion is measured as a percentage; 0.1% is considered acceptable, and for high-quality equipment the standard is 0.05%. Some speakers have distortion as high as 10%, and headphones about 2%.
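THD itself has a simple definition: the RMS sum of the harmonic components relative to the fundamental, expressed as a percentage. A sketch (the example voltage levels are illustrative):

```python
import math

def thd_percent(fundamental_rms, harmonic_rms_levels):
    """Total harmonic distortion: RMS sum of harmonics over the fundamental."""
    harmonics = math.sqrt(sum(v * v for v in harmonic_rms_levels))
    return 100.0 * harmonics / fundamental_rms

# A 1.0 V fundamental with small 2nd and 3rd harmonics gives exactly the
# "acceptable" 0.1% figure quoted above.
print(f"{thd_percent(1.0, [0.0008, 0.0006]):.2f}%")
```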

Power. This parameter is usually expressed in watts per channel and refers to the electrical output power delivered to the speakers. Many sound cards have built-in amplifiers delivering up to 8 watts per channel (typically 4 watts). Sometimes this power is not enough to reproduce every shade of the sound, so many speakers have built-in amplifiers of their own and can be switched to amplify the signal coming from the sound card.

5. Prospects.

So, there is clearly a multimedia boom in the world. At such a pace of development, when new directions appear while others that seemed very promising suddenly become uncompetitive, it is difficult even to compile surveys: their conclusions may become inaccurate or outdated after a very short time, and forecasts of the further development of multimedia systems are all the more unreliable. What is certain is that multimedia significantly increases the quantity and quality of information that can be stored in digital form and transmitted in the man-machine system.

Tables.

Table 1. MPC multimedia standards (cells lost in extraction are marked "-").

| Component | MPC Level 1 | MPC Level 2 | MPC Level 3 |
| CPU | - | - | 75 MHz Pentium |
| HDD | - | - | - |
| Floppy drive | 3.5-inch, 1.44 MB | 3.5-inch, 1.44 MB | 3.5-inch, 1.44 MB |
| CD-ROM drive | Single speed | Double speed | Quadruple speed |
| VGA adapter resolution | 640×480 | 640×480, 65,536 colors | 640×480, 65,536 colors |
| I/O ports | - | Serial, Parallel, Game, MIDI | Serial, Parallel, Game, MIDI |
| Software | Microsoft Windows 3.1 | Microsoft Windows 3.1 | Microsoft Windows 3.1 |
| Date of adoption | - | - | - |

Table 2. Data transfer rates of CD-ROM drives (1x = 150 KB/s; Nx values are the nominal multiples).

| Drive type | Data transfer rate, bytes/s | Data transfer rate, KB/s |
| Single-speed (1x) | 153,600 | 150 |
| Two-speed (2x) | 307,200 | 300 |
| Three-speed (3x) | 460,800 | 450 |
| Four-speed (4x) | 614,400 | 600 |
| Six-speed (6x) | 921,600 | 900 |
| Eight-speed (8x) | 1,228,800 | 1,200 |
| Ten-speed (10x) | 1,536,000 | 1,500 |
| Twelve-speed (12x) | 1,843,200 | 1,800 |
| Sixteen-speed (16x) | 2,457,600 | 2,400 |
| Eighteen-speed (18x) | 2,764,800 | 2,700 |
| Twenty-four-speed (24x, CAV) | 1,843,200-3,686,400 | 1,800-3,600 |
| Thirty-two-speed (32x) | 4,915,200 | 4,800 |
| Hundred-speed (100x) | 15,360,000 | 15,000 |

Table 3. Standard access times to data in CD-ROM drives (values lost in extraction are marked "-").

| Drive type | Data access time, ms |
| Single-speed (1x) | - |
| Two-speed (2x) | - |
| Three-speed (3x) | - |
| Four-speed (4x) | - |
| Six-speed (6x) | - |
| Eight-speed (8x) | - |
| Ten-speed (10x) | - |
| Twelve-speed (12x) | - |
| Sixteen-speed (16x) | - |
| Eighteen-speed (18x) | - |
| Twenty-four-speed (24x) | ~95 |
| Thirty-two-speed (32x) | - |
| Hundred-speed (100x) | - |

Literature.

Scott Mueller, Craig Zacker. Upgrading and Repairing PCs. Moscow: Williams Publishing House, 1999. 990 pp.

S. Novoseltsev. Multimedia: the synthesis of three elements. ComputerPress, 1991, no. 8, pp. 9-21.


Sound is the most expressive element of multimedia. The world of sounds surrounds a person constantly. We hear the sound of the surf, rustle of foliage, rumbling of waterfalls, birdsong, cries of animals, voices of people. All these are the sounds of our world.

The history of this element of information is as ancient for humans as that of the previous ones (text, image). Initially, man created devices with which he tried to reproduce natural sounds for practical purposes, in particular for hunting. Then the sounds in his head began to form sequences that he wanted to keep, and the first musical instruments appeared (one of the oldest is the Chinese qin). Gradually a language took shape in which the melodies thus born could be recorded and so preserved for a long time. The first attempts to develop such a "musical alphabet" were made in Ancient Egypt and Mesopotamia, and in the form we know now (musical notation) the system of writing down music took shape by the 17th century; its foundations were laid by Guido d'Arezzo.

At the same time, sound recording and storage systems were improving, and man learned to save and reproduce not only music but any surrounding sound. Sound was first recorded in 1877 on the phonograph invented by Thomas Edison. The recording took the form of indentations on a sheet of foil wrapped around a rotating cylinder. Edison was the first to teach his machine to say a loud "hello": the word was heard when the needle, connected to the diaphragm, retraced the recording. The mechanical-acoustic recording method lasted until the 1920s, when electrical systems were invented. The practical spread of sound recording was also aided by two revolutionary inventions:

· the invention of plastic magnetic tape in 1935;

· the rapid development of microelectronics in the 1960s.

The rapid development of computer technology gave this process a new impetus. The world of sounds gradually merged with the digital world.

There are two main methods of sound synthesis in sound cards:

Wavetable synthesis (WaveTable, WT) is based on the playback of samples: pre-recorded, digitized sounds of real instruments. Most sound cards contain a built-in set of instrument sounds stored in ROM; some cards also allow the use of additional samples loaded into RAM. To obtain a sound of the desired pitch, the playback speed of the recording is changed; to reproduce each note, complex synthesizers play several samples in parallel and apply additional processing (modulation, filtering).



Advantages: realistic sound of classical instruments; the sound is easy to obtain.

Disadvantages: a fixed set of pre-prepared timbres, many of whose parameters cannot be changed in real time; large memory requirements for samples (sometimes up to hundreds of KB per instrument); inconsistent sound across different synthesizer models because of differing sets of standard instruments.
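The core wavetable idea described above, changing the playback speed of a stored recording to shift its pitch, can be sketched in a few lines. This is an illustrative model only, not the firmware of any real sound card; the synthetic sine "sample" stands in for a ROM recording of an actual instrument, and the function names are invented for this example.

```python
import math

def make_sample(freq=440.0, sr=44100, n=1024):
    """Stand-in for a ROM sample: a 440 Hz sine wave.
    A real wavetable card would store recordings of actual instruments."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def resample(sample, ratio, out_len):
    """Play the stored sample at a different speed (ratio > 1 raises pitch),
    using linear interpolation between neighbouring stored values."""
    out = []
    pos = 0.0
    for _ in range(out_len):
        i = int(pos)
        if i + 1 >= len(sample):
            break  # ran past the end of the stored recording
        frac = pos - i
        out.append(sample[i] * (1 - frac) + sample[i + 1] * frac)
        pos += ratio
    return out

base = make_sample()                  # "recorded" note at 440 Hz
octave_up = resample(base, 2.0, 512)  # reading twice as fast -> 880 Hz
```

Reading the table twice as fast halves its duration and doubles the pitch, which is exactly why pure speed-change wavetable playback also shortens notes; real cards combine it with looping and envelope processing.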

Frequency-modulation synthesis (Frequency Modulation, FM) is based on the use of several signal generators with intermodulation. Each generator is controlled by a circuit that regulates the frequency and amplitude of its signal and forms the basic unit of synthesis: the operator. Sound cards use two-operator (OPL2) and four-operator (OPL3) synthesis. The operator connection scheme (the algorithm) and the parameters of each operator (frequency, amplitude, and how they change over time) determine the timbre; the number of operators and their control scheme determine the maximum number of synthesized tones.

Advantages: no need to pre-record instrument sounds and store them in ROM; a great variety of obtainable sounds; a timbre is easy to reproduce on different boards with compatible synthesizers.

Disadvantages: it is difficult to obtain a pleasant timbre across the entire audible range; imitation of real instruments is extremely rough; fine control of the operators is hard to organize, which is why sound cards use a simplified scheme with a small range of possible sounds.
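The two-operator scheme described above can be sketched as one sine "operator" modulating the phase of another. Again this is a minimal illustrative model, not OPL2 hardware behaviour (real chips add per-operator envelopes and feedback); the function name and parameters are assumptions for this example.

```python
import math

def fm_tone(fc, ratio, index, sr=44100, n=1024):
    """Two-operator FM: a modulator operator drives the phase of a carrier
    operator. 'index' sets the modulation depth, and hence the brightness
    of the timbre; fm = fc * ratio keeps the spectrum harmonic for
    integer ratios."""
    fm = fc * ratio
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]

pure = fm_tone(440.0, 1.0, 0.0)    # index 0 -> plain sine, no sidebands
bright = fm_tone(440.0, 1.0, 5.0)  # strong modulation -> rich harmonics
```

Raising the modulation index spreads energy into sidebands around the carrier, which is why a single pair of operators can produce many different timbres yet struggles to match the exact spectrum of a real instrument.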

If the sound of real instruments is needed in a composition, wavetable synthesis is the better fit, while frequency modulation is more convenient for creating new timbres, although the capabilities of the FM synthesizers on sound cards are rather limited.
