02. Basics of Broadcasting


2.1 Types of Radio Broadcasting
      Broadly speaking, broadcasting means the electronic transmission of radio and television signals intended for general public reception. It is also described as the systematic dissemination of entertainment, information, educational programming and other features for simultaneous reception by a scattered audience with appropriate receiving apparatus. Broadcasts may be audible only, as in radio, or a combination of sound and vision, as in television. Sound broadcasting started in the 1920s, while television broadcasting began in the 1930s. With the advent of cable television in the early 1950s and the use of satellites for broadcasting beginning in the early 1960s, television reception improved and the number of receivable programs increased dramatically.
      Marconi invented radio in the 1890s, and it was initially used mainly for communication between ships. Since then, radio broadcasting has taken different evolutionary paths; the Long Wave, Medium Wave, Short Wave and FM bands have been in popular use. In recent years we have seen the introduction of digital technology, information technology and advances in digital signal processing. Digital radio and internet radio are specific outcomes exploiting these advanced techniques.
      Traditionally, there are two types of radio stations: AM and FM. Amplitude Modulation (AM) transmission is carried in the Long Wave (LW), Medium Wave (MW) and Short Wave (SW) bands. Frequency Modulation (FM) transmission falls in the VHF band, in the 87-108 MHz range.
      From the beginning, radio broadcasting was based on analog techniques, but for more than 20 years there has been a steady migration to digital systems. The increasing use of computer-based information technology and digital equipment has revolutionized production studios and transmission facilities.
      Normally, a single standard based on analog technology has been adopted around the world for radio broadcasting. For AM, the radio wave emission standard is A3E, a Double Side Band (DSB) transmission. FM transmission also adopts the DSB method. However, the bandwidth and power used in transmission vary with the frequency band and the local regulations of different countries. AM transmission is generally regulated through international coordination, whereas FM transmission is regulated more by individual countries.

2.1.1 AM Radio
      In the Amplitude Modulation (AM) technique, the information is carried on a radio frequency carrier wave by varying its amplitude. The program audio signal modulates the amplitude of the carrier wave that is then transmitted by the antenna. As the instantaneous amplitude of the program signal rises to its maximum, the carrier amplitude varies accordingly up to its maximum, i.e., 100 percent modulation. Generally, the modulation level is kept between 40 and 80 percent to maintain good audio quality. An RF signal modulated at less than 40 percent will be too weak and noisy; similarly, the audio will sound harsh and distorted if the modulation level exceeds 100 percent.
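The modulation percentage described above can be measured from the envelope of the AM signal. A minimal sketch (the 60 percent figure and 1 V carrier are purely illustrative values, not from the text):

```python
def modulation_depth(e_max, e_min):
    """Modulation depth m = (Emax - Emin) / (Emax + Emin),
    computed from the maximum and minimum of the AM envelope."""
    return (e_max - e_min) / (e_max + e_min)

# A 1 V carrier modulated at 60 percent swings between 1.6 V and 0.4 V:
m = modulation_depth(1.6, 0.4)   # 0.6, i.e. 60 percent modulation
```

At 100 percent modulation the envelope minimum reaches zero; beyond that the envelope clips, which is the over-modulation distortion the text warns about.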
      In AM transmission, each radio station channel is allowed a 9KHz or 10KHz bandwidth. In America, a 5KHz-wide audio signal is modulated, so the final RF bandwidth is 10KHz; in other countries such as Nepal, a 4.5KHz-wide audio signal occupies a 9KHz RF bandwidth. For example, if a station is broadcasting on 792KHz, other stations near the site will be allocated either 783 or 801KHz. The MW band in America occupies the range 525 to 1605KHz for 108 different channels; similarly, in other countries the MW band spans 526-1606KHz for 120 different channels.
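The channel counts above follow directly from the rasters. A small sketch (channel centres placed on the conventional 530-1600KHz and 531-1602KHz grids inside the band edges given in the text):

```python
def channel_plan(first_khz, last_khz, spacing_khz):
    """All channel centre frequencies from first to last at the given spacing."""
    return list(range(first_khz, last_khz + 1, spacing_khz))

americas = channel_plan(530, 1600, 10)  # 10 kHz raster inside 525-1605 kHz -> 108 channels
itu = channel_plan(531, 1602, 9)        # 9 kHz raster inside 526-1606 kHz -> 120 channels

# Neighbours of the 792 kHz example station on the 9 kHz raster:
i = itu.index(792)
neighbours = (itu[i - 1], itu[i + 1])   # (783, 801), as in the text
```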
      AM transmission is normally used for long-distance or regional broadcasting. It utilizes more RF power and consumes more electricity. MW signals are ground wave signals and travel over the earth's surface at ground level. How far the signal travels, for a given transmitter power and antenna, is largely a function of the earth's ground conductivity over the route being traveled by the signal. AM transmission in the SW band utilizes sky wave signals. Sky wave propagation occurs when radio signals bounce off the ionosphere, high in the earth's atmosphere; thus, SW signals are capable of traveling from their transmitter to a distant city or place. During the daytime, a lower layer of the atmosphere generated by the sun's rays absorbs MW sky waves, so only the ground wave signals remain. This is why distant MW stations interfere with each other at night but not during the day.
      The wavelength for AM transmission is several hundred meters, depending on the frequency. Therefore, an AM radio station's transmitting antenna is usually the station's tower itself: the actual metal structure of the tower is energized with RF energy from the transmitter. The height of the tower depends on the transmitting frequency of the station; normally it is a 1/4-wave monopole antenna. The tower has an omnidirectional horizontal radiation pattern, transmitting the same amount of energy in all directions. Sometimes multiple AM towers are used together as a directional antenna system, for example for short wave transmission. The purpose of such an array antenna is to direct the transmitted energy towards the targeted area and to reduce the energy traveling in other directions.
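The height of the quarter-wave tower follows from the wavelength. A sketch using the 792KHz example station from above (the ideal free-space length; a real tower is trimmed for velocity factor and ground conditions):

```python
C = 3.0e8  # speed of light in free space, m/s

def quarter_wave_height_m(freq_hz):
    """Ideal height of a quarter-wave monopole: lambda / 4 = c / (4 * f)."""
    return C / freq_hz / 4.0

h = quarter_wave_height_m(792e3)  # roughly 95 m for a 792 kHz MW station
```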

2.1.2 FM Radio
      In Frequency Modulation (FM), the program audio signal modulates the frequency of the carrier wave that is then transmitted by the antenna. For FM broadcasting, the range of RF frequencies is 87 to 108 MHz. Carriers are assigned to channels normally spaced at 200KHz intervals to avoid interference with other stations' signals. In Kathmandu, the spacing is around 600KHz, as it depends on the number of channels operating in a city and the frequency deviation each station produces. The maximum deviation allowed for 100 percent modulation is plus or minus 75KHz. FM produces lower and upper sidebands extending well beyond plus or minus 75KHz from the carrier frequency.
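Although the sidebands in theory extend indefinitely, Carson's rule gives the practical occupied bandwidth. A sketch using the 75KHz deviation from the text and a 15KHz top audio frequency (the 15KHz figure is a typical broadcast value assumed here, not stated in the text):

```python
def carson_bandwidth_khz(deviation_khz, max_audio_khz):
    """Carson's rule: occupied FM bandwidth is approximately
    2 * (peak deviation + highest modulating frequency)."""
    return 2 * (deviation_khz + max_audio_khz)

bw = carson_bandwidth_khz(75, 15)  # 180 kHz, which fits the 200 kHz channel spacing
```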
      FM signals do not propagate by either ground waves or sky waves, but by space waves. As radio frequencies get higher, propagation becomes dependent on a line-of-sight path between the transmitter and the receiver. FM transmission is therefore employed for local broadcasting, as it cannot travel over a long distance. The transmitter power used is far less than that of AM transmitters and consumes less electrical energy. AM and FM antennas differ in construction and size. An antenna is the device that radiates radio frequency signals into the atmosphere for reception by receivers.
      FM antennas are only a few feet long because they operate at high frequencies. Dipole elements are used for radiation, and 2/4/6/8-bay dipoles are used to increase antenna gain. Multiple-bay antennas make the transmission system more efficient by changing the vertical radiation pattern: this focuses the energy beam and reduces the amount of energy sent up into the sky. Multiple antennas in an FM array are stacked vertically on the tower, whereas in AM, multiple arrays are lined up horizontally along the ground.
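The "few feet" claim for FM dipole elements can be checked with the same wavelength arithmetic as for AM towers. A sketch for an assumed mid-band 98 MHz carrier (ideal half-wavelength; practical elements run a few percent shorter):

```python
def half_wave_dipole_m(freq_mhz):
    """Ideal length of a half-wave dipole element: lambda / 2 = 300 / f(MHz) / 2."""
    return 300.0 / freq_mhz / 2.0

length = half_wave_dipole_m(98.0)  # about 1.53 m, i.e. roughly five feet
```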

2.2 Analog Color TV System and Standards
      There was no single international TV standard in the initial stages of television development. Three different monochrome (Black and White) systems developed independently: the 525-line American, 625-line European and 819-line French systems, each with a different channel bandwidth. Later, these B&W TV systems evolved into the three corresponding color TV systems: NTSC, PAL and SECAM. Each standard defines the format of the video that carries the picture information and how the video and audio signals are transmitted.

2.2.1 NTSC
      The National Television Systems Committee (NTSC) standard is used widely in North America and Japan. It is based on a 525-line, 30-frame-per-second scanning system. NTSC is broadcast over the air on the VHF and UHF bands, with each channel occupying 6MHz of spectrum. It can also be carried on analog cable and satellite delivery systems. In the US, NTSC is being phased out and replaced by ATSC digital television.
      In this system, two color difference signals, in-phase (I) and quadrature (Q), are generated and assigned different bandwidths. I represents the matrix 0.74(R-Y) - 0.27(B-Y), or 0.6R - 0.28G - 0.32B in component form. Likewise, Q represents the matrix 0.48(R-Y) + 0.41(B-Y), or 0.21R - 0.5G + 0.31B. A bandwidth of 2MHz is necessary for color signal transmission. Thus the 4MHz-wide luminance video and the 2MHz-wide chrominance video are conveniently accommodated in the 6MHz channel bandwidth. In NTSC, the color subcarrier frequency must be an odd multiple of half the line frequency to suppress dot-pattern interference. The major difference between a monochrome and a color receiver is that the latter has an additional circuit block called the color decoder or chroma subsystem.
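The component matrices above translate directly into code. A sketch of the NTSC encoding step, using the I and Q coefficients quoted in this section together with the standard luminance weights (the 0.299/0.587/0.114 Y weights are assumed here, not stated in the text):

```python
def rgb_to_yiq(r, g, b):
    """NTSC luminance and color difference components from RGB in the range 0..1."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (standard NTSC weights)
    i = 0.60 * r - 0.28 * g - 0.32 * b      # in-phase component, per the matrix above
    q = 0.21 * r - 0.50 * g + 0.31 * b      # quadrature component, per the matrix above
    return y, i, q

y, i, q = rgb_to_yiq(1.0, 0.0, 0.0)  # pure red -> (0.299, 0.6, 0.21)
```

For white (R = G = B), I and Q come out essentially zero, confirming that the chrominance signals carry only color information.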

2.2.2 PAL
      Phase Alternating Line (PAL) is widely adopted in Europe, Australia and other parts of the world. The name refers to the way the color information is carried on alternating lines. The color coding and picture structure differ from NTSC, with 625 lines per frame and 25 frames per second. The PAL color system uses U and V components to transmit the color information: U is the weighted B-Y signal, i.e. 0.493(B-Y), and V is the weighted R-Y signal, i.e. 0.877(R-Y). The V signal is switched from +90° to -90° on alternate lines during the blanking interval to avoid any visible disturbances.
      In addition to the U and V color components, a color burst is transmitted as well. It has two functions: to keep the subcarrier oscillator in the receiver synchronized, and to ensure that the V-signal switching sequence at the receiver remains the same as at the transmitting end. In the NTSC system, the burst phase lies along the -(B-Y) axis and has only one component. In PAL, however, it has two components, a -U component and a ±V component on successive lines. These are of equal amplitude and are chosen so that the resultant burst amplitude is the same as in the NTSC system.
      The main emphasis in the PAL system is on overcoming the differential phase errors that cause distortion in the reproduced colors. These errors are cancelled by shifting the phase of one of the color difference signals by 180° on alternate scanning lines. On demodulation at the receiver, the same phasor is shifted back by 180° to its original position.
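The cancellation can be demonstrated numerically by treating the chroma as a complex phasor U + jV. A sketch (the specific U, V values and 10° error are illustrative; real receivers average adjacent lines with a delay line):

```python
import cmath
import math

def pal_decode(u, v, phase_error_deg):
    """Simulate PAL V-axis switching: two successive lines carry U+jV and U-jV,
    both rotated by the same differential phase error. The receiver undoes the
    V switch and averages the lines, cancelling the hue shift (the only residue
    is a slight desaturation by cos(error))."""
    err = cmath.exp(1j * math.radians(phase_error_deg))
    line_a = (u + 1j * v) * err        # normal line, distorted in the channel
    line_b = (u - 1j * v) * err        # V-switched line, same distortion
    u_rx = (line_a.real + line_b.real) / 2
    v_rx = (line_a.imag - line_b.imag) / 2   # re-invert V of the switched line
    return u_rx, v_rx

u_rx, v_rx = pal_decode(0.3, 0.4, 10.0)
# Hue (the ratio V/U) is preserved despite the 10 degree phase error.
```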

2.2.3 SECAM
      SECAM, Séquentiel Couleur Avec Mémoire, is also known as the French color TV system. It is used in France, Russia and some other eastern European countries. In the SECAM system, only one of the two color difference signals is transmitted at a time, whereas in the NTSC and PAL systems the two chrominance signals are transmitted and received simultaneously. It is a 625-line, 50-field, 8MHz system (SECAM III). The color subcarrier is frequency modulated, and the magnitude of the frequency deviation represents the saturation of the color.
      If the nth line carries the R-Y signal during one field, it will carry the B-Y signal during the scanning of the following field. At the receiver, a 64-microsecond delay line is used as a one-line memory device to produce decoded outputs of both color difference signals simultaneously. The SECAM color system uses DR and DB components to transmit the color information: DB represents the weighted B-Y signal, i.e. 1.5(B-Y), and DR represents the weighted R-Y signal, i.e. -1.9(R-Y).

2.3 Digital Audio and Video
      Higher audio and video quality, greater production capabilities and more efficient operations are achieved by using digital signals in the studio. The terms digital radio and digital television are usually used to mean the method of transmission of the audio and video signals. Digital audio generally refers to audio signals that have been converted into a series of binary numbers (using the digits 0 and 1) rather than being sent as a continuously variable analog waveform. The advantage of this scheme is that binary numbers can be easily processed by computers and other digital equipment, and can be distributed and recorded with great accuracy. Digital audio is typically not subject to the degradations of analog audio such as noise and distortion. The digital signals produced in the studio may feed either analog or digital transmission systems.
      There are five basic terms that define digital audio: sampling, quantizing, resolution, bitstream and bit rate. In an analog-to-digital converter, the amplitude of the wave is measured at regular intervals; this process is called sampling. Each individual sample represents the level of the audio signal at a particular instant in time. The sampling rate is the rate at which digital samples are taken from the original material: the more often the original material is sampled, the more accurately the digital signal represents it. Audio compact discs (CDs) have a sampling rate of 44.1KHz, but most broadcast digital audio studio equipment uses 48KHz sampling. The resolution of digital audio is the precision with which the sampled audio is measured; CDs and many other items of equipment use 16 bits to represent each audio sample. When binary numbers representing the audio samples are sent down a wire one after the other, the stream of binary digits (bits) is referred to as a serial bitstream. The bit rate necessary to transport a digital audio signal is directly related to the digital resolution of the audio and its sampling rate. For example, if the CD resolution is 16 bits/sample and the CD sampling rate is 44,100 samples/second, then for 2-channel stereo CD audio:
Bit rate = 44,100 × 16 × 2 = 1,411,200 bps ≈ 1.41 Mbps
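The same arithmetic applies to any uncompressed PCM stream. A minimal sketch, also showing the 48KHz broadcast studio rate mentioned above:

```python
def pcm_bit_rate(sample_rate_hz, bits_per_sample, channels):
    """Uncompressed PCM bit rate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

cd = pcm_bit_rate(44_100, 16, 2)         # 1,411,200 bps, about 1.41 Mbps
broadcast = pcm_bit_rate(48_000, 16, 2)  # 1,536,000 bps for 48 kHz studio audio
```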
Likewise, in video, the process of grabbing a piece of the video information and holding it is called sampling, and the process of changing that sample into a number is called quantizing. In the analog video capture process, bright images on the face of the CCD are changed into higher voltages, and darker images on the CCD are turned into lower voltages. These are then reproduced on the CRT, where the higher voltages create brighter images and the lower voltages create darker images. The quality of the picture is limited and can easily be degraded by some equipment. Using computers or digital hardware, we can improve the quality of the picture, maintain that quality throughout the process, and create images and effects. Thus, digital video is a process that uses computer technology and software to create, store and transmit video images.
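The quantizing step described above can be sketched as mapping a sampled voltage onto one of a fixed number of integer levels (the 0-1 V range and 8-bit resolution below are illustrative assumptions):

```python
def quantize(voltage, v_max, bits):
    """Map an analog sample in the range [0, v_max] onto one of 2**bits
    integer levels (round to the nearest level, clamp at the extremes)."""
    levels = 2 ** bits
    code = int(voltage / v_max * (levels - 1) + 0.5)
    return max(0, min(levels - 1, code))

mid = quantize(0.5, 1.0, 8)   # mid-grey voltage -> a mid-range code
peak = quantize(1.0, 1.0, 8)  # brightest sample -> the top code, 255
```

The quantizing error, the difference between the true voltage and the level chosen, is what limits the precision at a given resolution; more bits means smaller steps.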

2.3.1 Digital to Analog Conversion
      To display a picture, the stream of digital information (numbers) must be converted back to a stream of analog voltages. This is necessary so that the electron gun at the back of the CRT can shoot a stream of electrons to spray the picture onto the phosphors on the face of the CRT. This takes place in the digital-to-analog (D to A) converter.

      Most digital equipment produces both analog and digital outputs. If we connect the analog output of one piece of digital equipment to the analog input of another, they will communicate with each other and remain compatible. In the first piece of equipment, the digital signal goes through a D to A converter to become analog; as soon as it reaches the next piece of equipment, it must go through an A to D converter to become digital again. It would be much simpler to go directly from digital out to digital in. The coder and decoder (CODEC) makes this possible. To ensure compatibility, the same CODEC should be used in the interconnected equipment.

2.4 Radio Wave Propagation
      When electric power is applied to a circuit, a system of voltages and currents is set up in it. The relation between voltage and current at any time and at a certain point is governed by the properties of the circuit itself. If the power is deliberately radiated into free space, it propagates in the form of an electromagnetic (EM) wave, and its behavior depends on the characteristics of free space. EM waves are energy propagated through free space at the velocity of light.
      EM waves have two major components, an electric field and a magnetic field. The two fields and the direction of propagation are mutually perpendicular. If the propagating medium is free space, it does not interfere with the normal radiation and propagation of radio waves; since no obstacle is present, EM waves spread uniformly in all directions from a point source. The radiated power per unit area is termed the power density, and it is inversely proportional to the square of the distance from the source:
p = Pt / (4πr²)
      Where, p is the power density at a distance r from an isotropic source and Pt is the transmitted power.
      The electric and magnetic field intensities of EM waves are the direct counterparts of voltage and current in circuits. For EM waves,
E = ηH
Where, E = electric field strength, V/m
            H = magnetic field strength, A/m
            η = characteristic impedance of the medium, Ω
     For free space, η = 120π Ω ≈ 377 Ω
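The two relations above combine to give the field strength at a distance, since the power density also equals E²/η. A numerical sketch (the 1 kW transmitter and 1 km distance are illustrative values):

```python
import math

ETA_0 = 120 * math.pi  # characteristic impedance of free space, about 377 ohms

def power_density(pt_watts, r_m):
    """Isotropic power density p = Pt / (4 * pi * r^2), in W/m^2."""
    return pt_watts / (4 * math.pi * r_m ** 2)

def field_strength(pt_watts, r_m):
    """Electric field from p = E^2 / eta, so E = sqrt(p * eta), in V/m."""
    return math.sqrt(power_density(pt_watts, r_m) * ETA_0)

p = power_density(1000.0, 1000.0)   # about 8e-5 W/m^2 at 1 km from a 1 kW source
e = field_strength(1000.0, 1000.0)  # about 0.17 V/m at the same point
```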

2.4.1 Behavior of EM Signals
      The nature and propagation of radio waves depend mainly on the composition of the medium through which they travel. Different properties and behaviors of radio waves, such as reflection, refraction, diffraction, attenuation and absorption, result from the effect of the environment. Some of these phenomena are:
a) Polarization: This refers to the physical orientation in space of the electric field component of the radiated waves. It is a characteristic of antennas that the radiation they emit is polarized in some form. Polarization may be linear (e.g. vertical or horizontal), circular or elliptical (e.g. right-hand or left-hand). A vertical antenna radiates waves whose electric field vectors are all vertical and remain so in free space; similarly, horizontal antennas produce waves whose polarization is horizontal.
b) Radiation: Antennas radiate EM waves. The radiation is predicted mathematically by Maxwell's equations, which show that a current flowing in a wire produces a magnetic field around it, and that a changing magnetic field is accompanied by an electric field. Whether the electric and magnetic fields can leave the current-carrying wire depends on the relation between its length and the wavelength of the current.
c) Reception: As in the case of radiation, a wire placed in a moving EM field will have a current induced in it, and this wire receives some of the radiation. The principle of reciprocity states that the characteristics of an antenna, such as impedance and radiation pattern, are identical whether it is used for reception or transmission. So receiving and transmitting antennas are virtually identical.
d) Attenuation: EM waves are attenuated as they travel outward from their source, the power density falling off with the square of the distance traveled. Attenuation is normally measured in decibels, i.e. α = 20 log (r2/r1).
e) Absorption: In free space, absorption of radio waves does not occur. The atmosphere, however, tends to absorb some waves, because some of their energy is transferred to the atoms and molecules of the atmosphere. Atmospheric absorption of EM waves at frequencies below about 10 GHz is quite insignificant.
f) Other Effects of the Environment: For propagation near the earth's surface, several factors exist which do not exist in free space. Waves are reflected by the ground, mountains and buildings. They are refracted as they pass through layers of the atmosphere with different degrees of ionization. They may also be diffracted around tall, massive objects, and they may even interfere with each other when two waves from the same source meet after having traveled by different paths.
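The attenuation expression in item (d) above can be evaluated directly; the familiar result is that every doubling of distance costs about 6 dB of field strength:

```python
import math

def path_attenuation_db(r1_m, r2_m):
    """Attenuation of field strength between distances r1 and r2 from the
    source: alpha = 20 * log10(r2 / r1), in decibels."""
    return 20 * math.log10(r2_m / r1_m)

double_dist = path_attenuation_db(1.0, 2.0)    # about 6 dB per doubling
ten_times = path_attenuation_db(1.0, 10.0)     # 20 dB per decade of distance
```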

2.4.2 Ground Waves, Sky Waves and Space Waves
In an earth environment, EM waves propagate in ways that depend not only on their own properties but also on those of the environment itself. Waves normally travel in straight lines. Except in unusual circumstances, frequencies above the HF range generally travel in straight lines; they propagate by means of so-called space waves or tropospheric waves. Frequencies below the HF range travel around the curvature of the earth or along its surface; these ground waves are one means of beyond-the-horizon propagation. Waves in the HF range are reflected by the ionized layers of the atmosphere and are called sky waves. Such signals are beamed into the sky and come down again after reflection, returning to earth well beyond the horizon.

2.4.3 AM Propagation and Ground Waves
      Normally, frequencies below the HF range travel along the surface of the earth and are vertically polarized to prevent short-circuiting of the electric field component. The wave induces currents in the ground over which it passes and thus loses some energy by absorption. This is made up by energy diffracted downward from the upper portion of the wave front. At some distance from the antenna, the wave tilts, lies down and dies. Radiation from an antenna by means of the ground wave gives rise to a field strength at a distance, given by,
Vr = 120π ht hr I / (λd)  (volts)
Where, ht, hr = heights of the transmitting and receiving antennas respectively
            I = the antenna (feeder) current
            λ = the wavelength of the wave
            d = the distance at which the field strength is measured
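As a numerical sketch of this ground-wave relation in the usual form V = 120π·ht·hr·I/(λd), with purely illustrative values for the heights, current, wavelength and distance:

```python
import math

def ground_wave_field(ht_m, hr_m, current_a, wavelength_m, d_m):
    """Received voltage V = 120*pi*ht*hr*I / (lambda*d) for short vertical
    antennas over a ground-wave path (free of absorption losses)."""
    return 120 * math.pi * ht_m * hr_m * current_a / (wavelength_m * d_m)

# Illustrative: 100 m transmit mast, 10 m receive antenna, 10 A feeder
# current, 300 m wavelength (1 MHz), 100 km path:
v = ground_wave_field(100.0, 10.0, 10.0, 300.0, 100e3)  # about 0.13 V
```

Real signals are weaker than this ideal figure because of the ground and atmospheric absorption discussed next.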
      Signal-strength reduction is due to attenuation with distance, ground and atmospheric absorption, the salinity and resistivity of the propagating surface, and the water-vapor content of the air. In LF and VLF propagation, the angle of tilt is the main determining factor if the wave propagates over a good conductor. Here the ground and the bottom of the E layer of the ionosphere are said to form a waveguide, and low-frequency waves do not suffer rapid fading, making them reliable for communication over long distances. High-power transmitters and very tall masts are used for such communications.

2.4.4 AM Propagation and Sky Waves
      The atmosphere receives sufficient energy from the sun to become ionized, and the various layers of the ionosphere have specific effects on HF propagation. Variations in temperature, density and composition stratify the ionosphere into D, E, F1 and F2 layers. The lowest, the D layer at around 70KM, disappears at night; it reflects LF waves but absorbs MF and HF waves. The E layer at around 100KM aids the reflection of MF surface waves and reflects HF waves in the daytime. The F1 layer at about 180KM reflects some HF waves, and most of the waves that pass through the F1 layer are reflected by the F2 layer. At night the F1 and F2 layers combine into a single F layer, which is the important reflecting medium. The ionization density and the height of each layer depend on the time of day, the average temperature and the sunspot cycle. Due to the disappearance of the absorbing D and E layers and the existence of a single F layer at night, better HF reception is possible. A wave will be bent downward provided the rate of change of ionization density per wavelength is sufficient.

2.4.5 Reflection Mechanism in Ionosphere
      EM waves are returned to earth by one of the layers through refraction. The incident wave is bent gradually farther and farther away from the normal due to the decreasing refractive index of the ionized layers. The virtual height of a layer is the height from which the wave appears to have been reflected; it is greater than the actual height at which the waves are gradually bent downward and returned.
      The critical frequency (fc) for a given layer is the highest frequency that will be returned to earth by that layer after having been beamed straight up at it. It is in the range of 5 to 12 MHz for the F2 layer. The Maximum Usable Frequency (MUF) is the limiting frequency for a specific angle of incidence (θ) at which HF waves will still return to earth. It may lie within 8 to 35 MHz. Mathematically,
MUF = fc secθ
      Frequencies much below the MUF suffer greater absorption and are not normally used for transmission. The skip distance is the shortest distance from a transmitter, measured along the surface of the earth, at which a sky wave of fixed frequency will be returned to earth. Fading is the fluctuation in signal strength at a receiver due to interference between two waves which left the same source but arrived at the destination by different paths. It may be rapid or slow, general or frequency-selective. Fading may occur because of interference between the lower and upper rays, between waves arriving by different numbers of hops or over different paths, or between a ground wave and a sky wave in the lower HF band.
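The secant law above is straightforward to evaluate. A sketch (the 9 MHz critical frequency and 60° angle are illustrative values inside the ranges quoted in the text):

```python
import math

def muf_mhz(critical_freq_mhz, incidence_deg):
    """Secant law: MUF = fc * sec(theta), where theta is the angle of
    incidence measured from the vertical at the reflecting layer."""
    return critical_freq_mhz / math.cos(math.radians(incidence_deg))

muf = muf_mhz(9.0, 60.0)  # sec(60 deg) = 2, so the MUF is 18 MHz
```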

2.4.6 VHF/UHF Propagation:
      Frequencies in the VHF and UHF bands travel in straight lines. They depend on line-of-sight conditions and, normally, their propagation is limited by the curvature of the earth. They propagate very much like electromagnetic waves in free space. The radio horizon for space waves is about four-thirds as far as the optical horizon. This is beneficial for covering areas beyond the line of sight and is a result of the varying density of the atmosphere, which refracts the waves around the curvature of the earth. To extend the radio horizon further, antennas are located on mountain tops.
      Since space waves travel close to the ground, any tall or massive object will obstruct them. As a consequence, shadow zones and diffraction result; diffraction makes coverage of some of the shadow region possible. Likewise, some areas receive such signals by reflection. This creates a form of interference, called ghosting, which may be observed on the screen of a TV receiver. Besides the regular effects of the atmosphere on space waves, atmospheric absorption, the effects of precipitation and ducting are observed as well. In some cases the change in refractive index is abrupt and a layer of warm air becomes trapped above cooler air, resulting in a phenomenon known as ducting. Microwaves are continuously refracted in the duct and reflected by the ground, and can travel distances exceeding 1000KM. Figure 2.2 shows the different types of EM waves including the ionosphere layers.


2.5 Audio Production Technologies
2.5.1 Disc Recording
      The long history of the analog vinyl LP disk has come to an end, as it has been replaced by the digitally recorded CD. CD-recorded sound is quieter, has less distortion and does not wear out. But analog recording existed for over a hundred years, and perhaps billions of LP disks remain in archives, music libraries and radio stations. As the content of all these analog records is very important and can never be completely transferred onto CDs, we must preserve, restore and reproduce analog recordings.
      In 1877, Thomas A. Edison invented the cylindrical phonograph. It was capable of recording sounds by converting vibrations of the air into a groove engraved in the tin foil that covered a rotating cylinder. Later models of the phonograph used special wax as the recording surface; recordings were made acoustically and reproduced by mechanical pickups through acoustical horns.
       The disk recording setup involves recording materials, cutter heads, turntables, pickups and recording techniques. Analog disk records have diameters of 7, 10 and 12 inches and speeds of 33⅓ and 45 revolutions per minute. The groove width can be about 0.5mm, and audio is recorded into the groove in such a manner that it can be reproduced by the movements of a stylus tip. A cutting head is an EM transducer that translates an electrical waveform into mechanical vibrations. These vibrations are applied to the cutting stylus, which in turn cuts and modulates a record groove on the surface of the disk. The first cutting heads were monophonic and operated on electromagnetic and piezoelectric principles.
      A special precision turntable rotates the disk to be cut at a precise speed. The cutting head uses negative feedback to obtain a uniform frequency response and reduced distortion while matching the impedance of the power amplifier to that of the cutter. A recording stylus is a sharp-pointed, gouge-shaped instrument for engraving a sound track on a cylindrical or disk record. The tip may be a precious stone such as sapphire, or metal.
      To play a record, a turntable, the device that rotates the disk at the required speed, is needed. Turntable platters can be belt-driven or direct-driven, and many turntables have synchronous motors. In the direct-drive type, the motor drives the shaft of the platter directly. A stroboscopic disk, used for checking the speed of a turntable, is a circular disk containing a number of black and white bars.
      A tonearm is needed along with the turntable for playback. The pivoted tonearm is designed to ease the work that the cartridge stylus has to perform: tracking the groove, supporting the cartridge at an appropriate height above the surface of the disk, and moving it across the record until the end of the groove is reached.
      In order to reproduce signals recorded on the phonograph record, a transducer (or phono-pickups) has to be used. The phonograph pickup has to convert modulation of the record grooves into the electrical signals, and at the same time support the tonearm at the proper height all while moving the tonearm across the surface of the record. The pickups must ensure higher sensitivity, improved frequency response and reduced distortion.
      The two principles used to generate the electrical signal are electrodynamic and piezoelectric. Electrodynamic types include moving magnet, moving coil, moving iron and induced magnet designs. In piezoelectric types, the voltage produced by the crystal is proportional to the amplitude of the stylus displacement. The function of the playback stylus of the cartridge is to follow all deflections of the groove; it has rounded-off, polished edges for smooth tracking. These pickups require a special type of amplification to reproduce the recorded sound.

2.5.2 Magnetic Recording
      Magnetic tape machines gained wide acceptance in radio and music production due to improved sound quality, longer playing times, and the ability to edit by physically cutting and rearranging the original recording without degradation. These tape machines contain electronic circuits to amplify and control the basic signal that is to be recorded, along with mechanical devices such as drive motors and tape-drive capstans. The recording system consists of precision mechanical devices to move the tape, sophisticated magnetic heads and tapes to record and reproduce, and intricate logic circuits to provide ease of operation. The audio engineer must choose the tape recorder and tape that best suit the application.
      A magnetic tape recorder can be seen as an audio transformer, in which the input and output windings are the record and reproduce heads and the magnetic core takes the form of the magnetic tape. Since the electrical-to-magnetic and magnetic-to-electrical transformations that occur during a record/reproduce cycle are governed by tape speed, any speed variation will distort the audio signal. The tape transport mechanism is the vital component that:
1. Drives the tape at a repeatable and constant speed.
2. Maintains a fixed mechanical alignment of the tape as it crosses the heads.
3. Provides constant pressure between the tape and head by either tensioning or pushing the tape.
4. Provides rewind, search and editing functions.
      To maintain constant tape velocity, the tape must maintain intimate contact with the rotating capstan. Active contact devices such as rubber pinch rollers push the tape against the capstan surface. Long-term speed error is called drift, while rapid change in instantaneous speed is called flutter. Similarly, for proper recording and playback of a magnetic recording to occur, the tape must move over the heads in a very precise path. The purpose of the tape guiding system is to protect the tape and to overcome the slight reel-to-reel variations in tape such as twists and bends. It deals with three aspects of tape motion: height, azimuth, and zenith.
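Drift and flutter are both expressed as percentage deviations of tape speed. As a minimal sketch (the speed values and sample count are assumed, not from the text), the two figures could be computed from a series of instantaneous speed measurements like this:

```python
# Illustrative sketch: quantifying drift and flutter from hypothetical
# instantaneous tape-speed readings (cm/s). The nominal studio speed of
# 38.1 cm/s (15 in/s) is an assumed example value.

def drift_percent(nominal, mean_measured):
    """Long-term speed error (drift) as a percentage of nominal speed."""
    return 100.0 * (mean_measured - nominal) / nominal

def flutter_percent(speeds):
    """Peak deviation of instantaneous speed from its mean, in percent."""
    mean = sum(speeds) / len(speeds)
    peak_dev = max(abs(s - mean) for s in speeds)
    return 100.0 * peak_dev / mean

samples = [38.12, 38.09, 38.11, 38.10, 38.13]   # hypothetical readings
nominal = 38.1
mean_speed = sum(samples) / len(samples)
print(f"drift:   {drift_percent(nominal, mean_speed):.3f} %")
print(f"flutter: {flutter_percent(samples):.3f} %")
```

Real flutter meters weight the deviations by frequency, but the percentage-of-nominal idea is the same.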
      Height must be controlled so that the recorded tracks on the tape pass directly over the pickup areas of the head. The recorded signal on the tape must be parallel to the pickup gap in the reproduce head; any angular error between the two is referred to as azimuth error. In magnetic heads, the magnetic core and gap of a reproduce head obey the principle of reciprocity. For a head used in the reproduce mode, external flux at the gap produces a voltage across the head winding. Conversely, if a voltage is applied to the head winding, the concentrated external flux field generated at the gap can be used to record a signal on a piece of moving tape.
      A magnetic tape can be erased and reused. Erasure is accomplished by demagnetizing the tape with a very strong AC or static field from a bulk eraser/degausser. Modern magnetic tape consists of a powder of very small magnetic particles glued to one surface of a plastic substrate or base film. Base film materials include paper, acetate, and polyester (Mylar). To enhance the strength of the thin base films used for cassettes, the polyester is pre-stretched. The ultimate performance of a tape recorder is determined by the tape drive, heads and electronics, and importantly by the physical and magnetic characteristics of the magnetic particles of the tape.
      The weak signal generated by a magnetic tape must be carefully boosted to a higher level by the first stage of the playback amplifier without introducing additional noise. The reproduce head produces an output voltage that is related to the rate of change of the flux on the tape, dΦ/dt. The primary task of the amplifier that drives the record head is to convert the input audio signal voltage into a proportional current flowing in the windings of the record head. The amplitude/frequency characteristics of the signal are modified by introducing an encoding device into the input signal path; a complementary decoder in the output signal path restores the signal to its original form. The Dolby noise reduction systems modify the amplitude of the signal to compress the dynamic range of the input signal.
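The dΦ/dt relationship is why a playback amplifier must equalize: for a sinusoidal recorded flux Φ(t) = Φ0·sin(2πft), the induced peak voltage is N·2πf·Φ0, so the raw head output rises 6 dB for every doubling of frequency. A small sketch (the turn count and flux value are assumed illustrative numbers):

```python
import math

# Sketch: peak voltage induced in a reproduce head, e = N * dΦ/dt, for a
# sinusoidal recorded flux of peak value Φ0 -> e_peak = N * 2πf * Φ0.
# Doubling the frequency doubles the output (+6 dB/octave), which the
# playback equalizer must compensate.

def peak_head_voltage(turns, flux_peak_wb, freq_hz):
    """Peak induced voltage for a sinusoidal flux (assumed ideal head)."""
    return turns * 2 * math.pi * freq_hz * flux_peak_wb

v1 = peak_head_voltage(turns=1000, flux_peak_wb=1e-9, freq_hz=1000)
v2 = peak_head_voltage(turns=1000, flux_peak_wb=1e-9, freq_hz=2000)
gain_db = 20 * math.log10(v2 / v1)
print(f"{v1*1e6:.1f} uV at 1 kHz, {v2*1e6:.1f} uV at 2 kHz "
      f"(+{gain_db:.1f} dB per octave)")
```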

2.5.3 Digital Recording
      Digital recording uses digital quantization, which transforms both the amplitude and frequency characteristics of the signal. Encoding an audio signal into digital form changes the frequency response and dynamic range requirements: the dynamic range of the audio level is reduced in exchange for increased bandwidth. It permits modifications to the tape coating thickness, gap length, track width and recording material. As errors in digital recording may result in a loud pop in the audio, digital recorders encode error detection and correction information into the audio data stream so that errors can be removed during decoding. Recording audio in digital form offers advantages such as:
1. The theoretical signal to noise ratio (SNR) is not degraded by multiple generations of re-recordings.
2. Audio quality is unaffected by amplitude and frequency anomalies in the recording process.
3. Print-through, AM noise and flutter are eliminated.
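The signal-to-noise ratio mentioned in advantage 1 is set by the word length of the quantizer: an ideal N-bit converter gives a theoretical SNR of about 6.02·N + 1.76 dB for a full-scale sine wave. The sketch below checks this formula against an actual quantization of a test tone (the sample count and test-tone choice are assumed for illustration):

```python
import math

# Sketch: theoretical vs. measured SNR of an ideal N-bit quantizer.
# Theory: SNR ~ 6.02*N + 1.76 dB for a full-scale sine wave.

def quantizer_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer (full-scale sine)."""
    return 6.02 * bits + 1.76

def measured_snr_db(bits, n=10000):
    """Quantize a full-scale sine and measure the resulting SNR."""
    step = 2.0 / (2 ** bits)                    # step size over [-1, 1]
    sig_pow = noise_pow = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 7 * i / n)   # 7 cycles of a test tone
        q = round(x / step) * step              # mid-tread quantization
        sig_pow += x * x
        noise_pow += (x - q) ** 2
    return 10 * math.log10(sig_pow / noise_pow)

for bits in (8, 16):
    print(f"{bits:2d} bits: theory {quantizer_snr_db(bits):6.2f} dB, "
          f"measured {measured_snr_db(bits):6.2f} dB")
```

The 16-bit figure of roughly 98 dB is why digital audio tolerates many re-recording generations without audible degradation.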

2.6 Video Production Technologies
2.6.1 Video Tape Recording
      Video recording is quite complex because of the video signal's high frequency and large bandwidth requirements. However, the video signal can be recorded on a magnetic tape for picture reproduction in a similar way as an audio tape is used for sound recording and playback. For recording up to the highest video frequency of 5 MHz, the head gap size and the tape speed must be maintained at suitable values. Because the head gap must be extremely small, achieving the required head-to-tape speed by moving the tape alone would result in excessive wear and tear and mechanical instability. So, to increase the relative speed of tape and head, the tape heads are mounted on the periphery of a drum which itself rotates at a certain speed.
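The trade-off can be made concrete. The shortest wavelength a head can resolve is roughly twice its gap length, so the highest recordable frequency is about f_max = v / (2·g), where v is the head-to-tape speed. A sketch with assumed figures (the 0.3 µm gap, 2 cm/s linear speed and 62 mm drum are illustrative values, not from the text):

```python
import math

# Sketch: why a rotating head drum is needed for 5 MHz video.
# Highest recordable frequency f_max ~ v / (2*g), with v the head-to-tape
# speed and g the head gap length. All numeric values are assumptions.

def max_frequency_hz(head_to_tape_speed_mps, gap_m):
    return head_to_tape_speed_mps / (2 * gap_m)

gap = 0.3e-6                      # assumed 0.3 um head gap
linear = 0.02                     # assumed 2 cm/s linear tape speed
drum_d, drum_rps = 0.062, 25.0    # assumed 62 mm drum at 25 rev/s
writing = math.pi * drum_d * drum_rps + linear   # approx. relative speed

print(f"stationary head: {max_frequency_hz(linear, gap)/1e3:.0f} kHz")
print(f"rotating drum:   {max_frequency_hz(writing, gap)/1e6:.1f} MHz")
```

With the drum, the writing speed reaches several metres per second and the 5 MHz requirement is comfortably met, while the tape itself still creeps along slowly.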
      There are different methods used in magnetic recording on a video tape: Longitudinal Video Recording (LVR), Transverse Video Recording (TVR) and Helical Scan Recording (HSR). LVR is a method in which video signals are recorded on tracks running along the length of the tape. In the TVR method, signals are recorded across the width of the tape in the form of parallel strips as the tape is drawn past a rotating recording head assembly. Quadruplex recording based on TVR was first introduced in 1956 by the Ampex Corporation of the USA, but its complex head assembly and expensive electronics led to the development of helical scan recording.
      All modern-day video tape recorders use the helical scan method of recording. In helical scan recording, the tape is wrapped in a half-helix around the drum, so that as it is moved past, it moves slantwise up the drum at an angle of inclination, entering at the bottom and leaving at the top. There are two recording heads inside the drum, fitted 180 degrees apart and opposite to each other. The heads trace out diagonal tracks across the tape, one track per head. While each recording head makes one full scan across the tape width, the information of one complete field comprising 312.5 lines is recorded. Thus two successive scans record one complete picture of 625 lines. The tape width has been standardized at 1/2 inch, but the available surface area is used in different ways by different manufacturers. For stabilization of the tape head speed, a control track is used which carries a longitudinal recording of synchronizing pulses recurring at a frequency of 25 Hz. The sync pulses control a servo mechanism which regulates the speed of the tape transport motor and that of the rotating video heads.
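The timing figures above fit together neatly: with two heads 180 degrees apart, one drum revolution lays down two fields, i.e. one complete frame, so the drum must spin at the frame rate. A quick arithmetic check:

```python
# Sketch: consistency of the 625-line / 25 fps helical-scan timing.
# Two heads 180 degrees apart -> one revolution records two fields
# (one complete frame), so drum speed equals the frame rate.

LINES_PER_FRAME = 625
FIELDS_PER_FRAME = 2
FRAME_RATE = 25            # Hz; matches the 25 Hz control-track pulses

lines_per_field = LINES_PER_FRAME / FIELDS_PER_FRAME   # lines per head scan
field_rate = FRAME_RATE * FIELDS_PER_FRAME             # fields per second
drum_rps = FRAME_RATE                                  # revolutions/second
drum_rpm = drum_rps * 60

print(f"{lines_per_field} lines/field, {field_rate} fields/s, "
      f"drum at {drum_rpm} rpm")
```

This recovers the 312.5 lines per field stated in the text and a drum speed of 1500 rpm, the figure used by 625-line helical machines with two heads.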

2.6.2 Video Recording Systems
      Different manufacturers of recording systems, such as Sony and Panasonic, have established different standards of recorders, video formats and types of tape used.

Video Disc:
A video disc is essentially a gramophone record with pictures on it. The optical vision disc uses reflected light from microscopic pits in the disc surface which correspond to the video and audio signals. The pits are etched out and then coated with reflective aluminum when the disc is manufactured. In playback, a laser beam is reflected from the bottom of the disc to provide the signal that corresponds to the pit variations. The video signal is frequency modulated before the recording process, and the associated audio signal is recorded in two FM bands within the video FM signal. These two audio channels may be used for stereo sound or for bilingual recordings. To recover the information recorded on the disc, a low-power laser in the disc player is made to scan the surface of the disc. Variations in the intensity of the beam reflected from the surface are converted into equivalent electrical signals by a photodiode.

DVD Players:
       DVD stands for Digital Versatile Disc, also called Digital Video Disc, and has a much larger data capacity than a CD. The data on a DVD is also encoded in the form of small pits and bumps along the track of the disc. To read the bumps encoded on the tiny tracks, a precise disc-reading mechanism is needed. DVDs come in variants ranging from single-sided single-layer to double-sided double-layer. The DVD player has a laser assembly that shines the laser beam onto the surface of the disc. Based on the encoded video format, the DVD player decodes the MPEG-encoded data and converts it into a composite video signal. The player also decodes the audio stream and sends it to a Dolby decoder, where it is amplified before being sent to the speakers.
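A single-sided single-layer DVD holds about 4.7 GB, and capacity translates into playing time through the average bitrate of the MPEG stream. A small sketch (the 5 Mbit/s combined video-plus-audio bitrate is an assumed typical value):

```python
# Sketch: converting DVD capacity into approximate playing time.
# Capacity and bitrate figures are assumed typical values.

def playing_time_minutes(capacity_gb, avg_bitrate_mbps):
    """Playing time for a given disc capacity and average stream bitrate."""
    bits = capacity_gb * 1e9 * 8          # capacity in bits
    return bits / (avg_bitrate_mbps * 1e6) / 60

print(f"{playing_time_minutes(4.7, 5.0):.0f} minutes")   # roughly two hours
```

Halving the bitrate doubles the playing time at the cost of picture quality, which is why movies of very different lengths fit on the same disc.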

2.6.3 Studio Television Cameras
       The typical studio is equipped with three studio television cameras. Although many studios have more than three cameras, it would be unusual to find fewer in a typical setup. Studio television cameras are non-format-specific: the video signal they generate (analog or digital) can be recorded onto almost any video format. A camcorder, on the other hand, is a format-specific device. All modern studios are equipped with standard-definition digital cameras (SDTV), and a few have high-definition cameras (HDTV). The cameras are numbered Camera One, Two and Three.
       The production personnel who operate the cameras are called camera operators. A growing number of studios have robotic cameras that can be controlled by a joystick and a computer program. In the case of robotic cameras, only a few production crew members are needed to drive them.
