Linear Systems

Most DSP techniques are based on a divide-and-conquer strategy called superposition. The signal being processed is broken into simple components, each component is processed individually, and the results reunited. This approach has the tremendous power of breaking a single complicated problem into many easy ones. Superposition can only be used with linear systems, a term meaning that certain mathematical rules apply. Fortunately, most of the applications encountered in science and engineering fall into this category. This chapter presents the foundation of DSP: what it means for a system to be linear, various ways for breaking signals into simpler components, and how superposition provides a variety of signal processing techniques.

Signals and Systems

A signal is a description of how one parameter varies with another parameter. For instance, voltage changing over time in an electronic circuit, or brightness varying with distance in an image. A system is any process that produces an output signal in response to an input signal. This is illustrated by the block diagram in Fig. 5-1. Continuous systems input and output continuous signals, such as in analog electronics. Discrete systems input and output discrete signals, such as computer programs that manipulate the values stored in arrays.

Several rules are used for naming signals. These aren't always followed in DSP, but they are very common and you should memorize them. The mathematics is difficult enough without a confusing notation. First, continuous signals use parentheses, such as: x(t) and y(t), while discrete signals use brackets, as in: x[n] and y[n]. Second, signals use lower case letters. Upper case letters are reserved for the frequency domain, discussed in later chapters. Third, the name given to a signal is usually descriptive of the parameters it represents. For example, a voltage depending on time might be called: v(t), or a stock market price measured each day could be: p[d].

Signals and systems are frequently discussed without knowing the exact parameters being represented. This is the same as using x and y in algebra, without assigning a physical meaning to the variables. This brings in a fourth rule for naming signals. If a more descriptive name is not available, the input signal to a discrete system is usually called: x[n], and the output signal: y[n]. For continuous systems, the signals: x(t) and y(t) are used.

There are many reasons for wanting to understand a system. For example, you may want to design a system to remove noise in an electrocardiogram, sharpen an out-of-focus image, or remove echoes in an audio recording. In other cases, the system might have a distortion or interfering effect that you need to characterize or measure. For instance, when you speak into a telephone, you expect the other person to hear something that resembles your voice. Unfortunately, the input signal to a transmission line is seldom identical to the output signal. If you understand how the transmission line (the system) is changing the signal, maybe you can compensate for its effect. In still other cases, the system may represent some physical process that you want to study or analyze. Radar and sonar are good examples of this. These methods operate by comparing the transmitted and reflected signals to find the characteristics of a remote object. In terms of system theory, the problem is to find the system that changes the transmitted signal into the received signal.
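A minimal sketch (in Python, with a made-up gain-of-two system) of these conventions: a discrete input signal x[n] stored as an array, a system implemented as a function, and the resulting output signal y[n], mirroring the block diagram of Fig. 5-1.

```python
import numpy as np

# A discrete signal is a sequence of numbers indexed by sample number n.
# Following the naming rules: lower case names, bracket indexing for discrete signals.
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0])   # input signal, x[n]

def system(signal):
    """An example discrete system: simply a gain of 2 applied to every sample."""
    return 2.0 * signal

y = system(x)                                    # output signal, y[n]
print(y)    # [ 0.  2.  4.  2.  0. -2.]
```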
At first glance, it may seem an overwhelming task to understand all of the possible systems in the world. Fortunately, most useful systems fall into a category called linear systems. This fact is extremely important. Without the linear system concept, we would be forced to examine the individual characteristics of many unrelated systems. Instead, we can focus on the traits of the linear system category as a whole. Our first task is to identify what properties make a system linear, and how they fit into the everyday notion of electronics, software, and other signal processing systems.

Requirements for Linearity

A system is called linear if it has two mathematical properties: homogeneity and additivity. If you can show that a system has both properties, then you have proven that the system is linear. Likewise, if you can show that a system doesn't have one or both properties, you have proven that it isn't linear. A third property, shift invariance, is not a strict requirement for linearity, but it is a mandatory property for most DSP techniques. When you see the term linear system used in DSP, you should assume it includes shift invariance unless you have reason to believe otherwise. These three properties form the mathematics of how linear system theory is defined and used. Later in this chapter we will look at more intuitive ways of understanding linearity. For now, let's go through these formal mathematical properties.

As illustrated in Fig. 5-2, homogeneity means that a change in the input signal's amplitude results in a corresponding change in the output signal's amplitude. In mathematical terms, if an input signal of x[n] results in an output signal of y[n], an input of k x[n] results in an output of k y[n], for any input signal and any constant, k.

A simple resistor provides a good example of both homogeneous and nonhomogeneous systems. If the input to the system is the voltage across the resistor, v(t), and the output from the system is the current through the resistor, i(t), the system is homogeneous. Ohm's law guarantees this; if the voltage is increased or decreased, there will be a corresponding increase or decrease in the current. Now, consider another system where the input signal is the voltage across the resistor, v(t), but the output signal is the power being dissipated in the resistor, p(t). Since power is proportional to the square of the voltage, if the input signal is increased by a factor of two, the output signal is increased by a factor of four. This system is not homogeneous and therefore cannot be linear.

The property of additivity is illustrated in Fig. 5-3. Consider a system where an input of x1[n] produces an output of y1[n]. Further suppose that a different input, x2[n], produces another output, y2[n]. The system is said to be additive if an input of x1[n] + x2[n] results in an output of y1[n] + y2[n], for all possible input signals. In words, signals added at the input produce signals that are added at the output. The important point is that added signals pass through the system without interacting. As an example, think about a telephone conversation with your Aunt Edna and Uncle Bernie. Aunt Edna begins a rather lengthy story about how well her radishes are doing this year. In the background, Uncle Bernie is yelling at the dog for having an accident in his favorite chair. The two voice signals are added and electronically transmitted through the telephone network.
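A minimal sketch (Python, with made-up example systems) of these two tests: the resistor's voltage-to-current system passes both homogeneity and additivity, while the voltage-to-power system fails both.

```python
import numpy as np

R = 100.0  # resistance in ohms

def current(v):
    """System 1: voltage in, current out (Ohm's law) -- linear."""
    return v / R

def power(v):
    """System 2: voltage in, dissipated power out -- nonlinear."""
    return v ** 2 / R

def is_homogeneous(system, x, k=3.0):
    """Check that scaling the input by k scales the output by k."""
    return np.allclose(system(k * x), k * system(x))

def is_additive(system, x1, x2):
    """Check that the response to a sum equals the sum of the responses."""
    return np.allclose(system(x1 + x2), system(x1) + system(x2))

x1 = np.array([1.0, -2.0, 0.5, 4.0])
x2 = np.array([0.3, 1.0, -1.5, 2.0])

print(is_homogeneous(current, x1), is_additive(current, x1, x2))  # True True
print(is_homogeneous(power, x1), is_additive(power, x1, x2))      # False False
```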
Since this system is additive, the sound you hear is the sum of the two voices as they would sound if transmitted individually. You hear Edna and Bernie, not the creature, Ednabernie. A good example of a nonadditive circuit is the mixer stage in a radio transmitter. Two signals are present: an audio signal that contains the voice or music, and a carrier wave that can propagate through space when applied to an antenna. The two signals are added and applied to a nonlinearity, such as a pn junction diode. This results in the signals merging to form a third signal, a modulated radio wave capable of carrying the information over great distances.

As shown in Fig. 5-4, shift invariance means that a shift in the input signal will result in nothing more than an identical shift in the output signal. In more formal terms, if an input signal of x[n] results in an output of y[n], an input signal of x[n + s] results in an output of y[n + s], for any input signal and any constant, s. Pay particular attention to how the mathematics of this shift is written; it will be used in upcoming chapters. By adding a constant, s, to the independent variable, n, the waveform can be advanced or retarded in the horizontal direction. For example, when s = 2, the signal is shifted left by two samples; when s = -2, the signal is shifted right by two samples.

Shift invariance is important because it means the characteristics of the system do not change with time (or whatever the independent variable happens to be). If a blip in the input causes a blop in the output, you can be assured that another blip will cause an identical blop. Most of the systems you encounter will be shift invariant. This is fortunate, because it is difficult to deal with systems that change their characteristics while in operation. For example, imagine that you have designed a digital filter to compensate for the degrading effects of a telephone transmission line. Your filter makes the voices sound more natural and easier to understand. Much to your surprise, along comes winter and you find the characteristics of the telephone line have changed with temperature. Your compensation filter is now mismatched and doesn't work especially well. This situation may require a more sophisticated algorithm that can adapt to changing conditions.

Why do homogeneity and additivity play a critical role in linearity, while shift invariance is something on the side? This is because linearity is a very broad concept, encompassing much more than just signals and systems. For example, consider a farmer selling oranges for $2 per crate and apples for $5 per crate. If the farmer sells only oranges, he will receive $20 for 10 crates, and $40 for 20 crates, making the exchange homogeneous. If he sells 20 crates of oranges and 10 crates of apples, the farmer will receive: 20 × $2 + 10 × $5 = $90. This is the same amount as if the two had been sold individually, making the transaction additive. Being both homogeneous and additive, this sale of goods is a linear process. However, since there are no signals involved, this is not a system, and shift invariance has no meaning. Shift invariance can be thought of as an additional aspect of linearity needed when signals and systems are involved.

Static Linearity and Sinusoidal Fidelity

Homogeneity, additivity, and shift invariance are important because they provide the mathematical basis for defining linear systems.
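A minimal sketch (Python, with hypothetical example systems) of the shift-invariance test: a three-point moving average responds to a shifted input with an identically shifted output, while a system that scales each sample by its index n does not. A circular shift is used here only to keep the example short.

```python
import numpy as np

def moving_average(x):
    """Shift-invariant system: 3-point moving average (zero padding at the edges)."""
    padded = np.concatenate(([0.0], x, [0.0]))
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def index_gain(x):
    """Time-varying system: multiplies each sample by its index n."""
    return x * np.arange(len(x))

def is_shift_invariant(system, x, s=2):
    """Compare 'shift then process' with 'process then shift' for a shift of s samples."""
    out_of_shifted = system(np.roll(x, s))   # shift the input, then run the system
    shifted_output = np.roll(system(x), s)   # run the system, then shift the output
    return np.allclose(out_of_shifted, shifted_output)

x = np.array([0.0, 1.0, 2.0, 0.0, -1.0, 0.0, 0.0, 0.0])
print(is_shift_invariant(moving_average, x))  # True
print(is_shift_invariant(index_gain, x))      # False
```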
Unfortunately, these properties alone don't provide most scientists and engineers with an intuitive feeling of what linear systems are about. The properties of static linearity and sinusoidal fidelity are often of help here. These are not especially important from a mathematical standpoint, but relate to how humans think about and understand linear systems. You should pay special attention to this section.

Static linearity defines how a linear system reacts when the signals aren't changing, i.e., when they are DC or static. The static response of a linear system is very simple: the output is the input multiplied by a constant. That is, a graph of the possible input values plotted against the corresponding output values is a straight line that passes through the origin. This is shown in Fig. 5-5 for two common linear systems: Ohm's law for resistors, and Hooke's law for springs. For comparison, Fig. 5-6 shows the static relationship for two nonlinear systems: a pn junction diode, and the magnetic properties of iron.

Superposition works by decomposing the input signal into a group of simple, additive components, passing each component through the linear system individually, and synthesizing (adding) the resulting outputs. Here is the important part: the output signal obtained by this method is identical to the one produced by directly passing the input signal through the system. This is a very powerful idea. Instead of trying to understand how complicated signals are changed by a system, all we need to know is how simple signals are modified. In the jargon of signal processing, the input and output signals are viewed as a superposition (sum) of simpler waveforms. This is the basis of nearly all signal processing techniques.

As a simple example of how superposition is used, multiply the number 2041 by the number 4, in your head. How did you do it? You might have imagined 2041 match sticks floating in your mind, quadrupled the mental image, and started counting. Much more likely, you used superposition to simplify the problem. The number 2041 can be decomposed into: 2000 + 40 + 1. Each of these components can be multiplied by 4 and then synthesized to find the final answer, i.e., 8000 + 160 + 4 = 8164.

Common Decompositions

Keep in mind that the goal of this method is to replace a complicated problem with several easy ones. If the decomposition doesn't simplify the situation in some way, then nothing has been gained. There are two main ways to decompose signals in signal processing: impulse decomposition and Fourier decomposition. They are described in detail in the next several chapters. In addition, several minor decompositions are occasionally used. Here are brief descriptions of the two major decompositions, along with three of the minor ones.

Impulse Decomposition

As shown in Fig. 5-12, impulse decomposition breaks an N sample signal into N component signals, each containing N samples. Each of the component signals contains one point from the original signal, with the remainder of the values being zero. A single nonzero point in a string of zeros is called an impulse. Impulse decomposition is important because it allows signals to be examined one sample at a time. Similarly, systems are characterized by how they respond to impulses. By knowing how a system responds to an impulse, the system's output can be calculated for any given input. This approach is called convolution, and is the topic of the next two chapters.

Step Decomposition

Step decomposition, shown in Fig. 5-13, also breaks an N sample signal into N component signals, each composed of N samples. Each component signal is a step, that is, the first samples have a value of zero, while the last samples are some constant value.
Consider the decomposition of an N point signal, x[n], into the components: x0[n], x1[n], x2[n], ..., xN-1[n]. The kth component signal, xk[n], is composed of zeros for points 0 through k-1, while the remaining points have a value of: x[k] - x[k-1]. For example, the 5th component signal, x5[n], is composed of zeros for points 0 through 4, while the remaining samples have a value of: x[5] - x[4] (the difference between sample 4 and sample 5 of the original signal).
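A minimal sketch (Python; the linear system here is an arbitrary 3-point moving average chosen only for illustration) of the impulse and step decompositions and the superposition idea: the components add back to the original signal, and processing the components individually and summing the results matches processing the whole signal at once.

```python
import numpy as np

def impulse_decomposition(x):
    """Return N component signals, each holding one sample of x and zeros elsewhere."""
    n = len(x)
    components = np.zeros((n, n))
    for k in range(n):
        components[k, k] = x[k]
    return components

def step_decomposition(x):
    """Return N step components: xk[n] is 0 for n < k and x[k] - x[k-1] for n >= k."""
    n = len(x)
    components = np.zeros((n, n))
    components[0, :] = x[0]                  # 0th component: a constant level of x[0]
    for k in range(1, n):
        components[k, k:] = x[k] - x[k - 1]  # step of height x[k] - x[k-1] starting at sample k
    return components

def linear_system(x):
    """Example linear, shift-invariant system: 3-point moving average."""
    padded = np.concatenate(([0.0], x, [0.0]))
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

x = np.array([2.0, 1.0, -1.0, 3.0, 0.0, 4.0])

for decompose in (impulse_decomposition, step_decomposition):
    parts = decompose(x)
    print(np.allclose(parts.sum(axis=0), x))          # components add back to x
    direct = linear_system(x)                          # pass the whole signal through
    via_parts = sum(linear_system(p) for p in parts)   # pass each part, then add the outputs
    print(np.allclose(direct, via_parts))               # superposition: identical results
```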