CSIR GOLDEN JUBILEE SERIES
A MATTER OF CHANCE
K.D. PAVATE
National Institute of Science Communication
Dr. K.S. Krishnan Marg
New Delhi 110 012
INDIA
A Matter of Chance
K.D. Pavate
© National Institute of Science Communication (CSIR)
First Edition: July 1996
Reprinted: August 2001
ISBN: 81-7236-133-5

CSIR Golden Jubilee Series Publication No. 17

Series Editor
Dr Bal Phondke
Volume Editors
S.K. Nag and Parvinder Chawla
Cover Design
Pradip Banerjee
Illustrations
Pradip Banerjee, Neeru Sharma and Sushila Vohra
Production
Supriya Gupta, Rohini Kaul, S. Bhushan, Gopal Porel, Sudhir C Mamgain, B.S. Rana, Lalit Mohan and Sahandra
Designed, Printed and Published by National Institute of Science Communication (CSIR) Dr. K.S. Krishnan Marg, New Delhi 110 012
For sale in India only
Foreword

The Council of Scientific & Industrial Research (CSIR), established in 1942, is committed to the advancement of scientific knowledge, and economic and industrial development of the country. Over the years CSIR has created a base for scientific capability and excellence spanning a wide spectrum of areas enabling it to carry out research and development as well as provide national standards, testing and certification facilities. It has also been training researchers, popularizing science and helping in the inculcation of scientific temper in the country.

The CSIR today is a well-knit and action-oriented network of 41 laboratories spread throughout the country with activities ranging from molecular biology to mining, medicinal plants to mechanical engineering, mathematical modelling to metrology, chemicals to coal and so on. While discharging its mandate, CSIR has not lost sight of the necessity to remain at the cutting edge of science in order to be in a position to acquire and generate expertise in frontier areas of technology. CSIR's contributions to high-tech and emerging areas of science and technology are recognised, among others, for precocious flowering of tissue cultured bamboo, DNA finger-printing, development of non-noble metal zeolite catalysts, mining of polymetallic nodules from the Indian Ocean bed, building an all-composite light research aircraft, and high temperature superconductivity, to mention only a few.

Being acutely aware that the pace of scientific and technological development cannot be maintained without a steady influx of bright young scientists, CSIR has undertaken a vigorous programme of human resource development which includes, inter alia, collaborative efforts with the University Grants Commission aimed at nurturing the budding careers of fresh science and technology graduates. However, all these would not yield the desired results in the absence of an atmosphere appreciative of advances in science and technology. If the people at large remain in awe of science and consider it as something far removed from their realms, scientific culture cannot take root. CSIR has been alive to this problem and has been active in taking science to the people, particularly through the print medium. It has an active programme aimed at popularization of science, its concepts, achievements and utility, by bringing it to the doorsteps of the masses through both print and electronic media.

This is expected to serve a dual purpose. First, it would create awareness and interest among the intelligent layman and, secondly, it would help youngsters at the point of choosing an academic career in getting a broad-based knowledge about science in general and its frontier areas in particular. Such familiarity would not only kindle in them a deep and abiding interest in matters scientific but would also be instrumental in helping them to choose the scientific or technological education that is best suited to them according to their own interests and aptitudes. There would be no groping in the dark for them.

However, this is one field where enough is never enough. This was the driving consideration when it was decided to bring out, in this 50th anniversary year of CSIR, a series of profusely illustrated and specially written popular monographs on a judicious mix of scientific and technological subjects varying from the outer space to the inner space. Some of the important subjects covered are astronomy, meteorology, oceanography, new materials, immunology and biotechnology.

It is hoped that this series of monographs would be able to whet the varied appetites of a wide cross-section of the target readership and spur them on to gathering further knowledge on the subjects of their choice and liking. An exciting sojourn through the wonderland of science, we hope, awaits the reader. We can only wish him bon voyage and say, happy hunting.
Preface

There is an interesting story from the life of Bernard Shaw, the well known author. Isadora Duncan, a famous dancer of those days, suggested to Shaw that they ought to get married, so that their daughter would have her looks and his brains. To this the witty author replied, "What if we have a son with my looks and your brains?" This is probably just a story, but the point is well taken. It is applicable to any couple who have children. With so many ancestors to provide the genes, it would be difficult to predict what characteristics an offspring would inherit, and from whom.

The theory of probability, or the 'laws of chance', has its origin in gambling. Some well known gamblers (among whom were a few mathematicians) wanted to know in advance the manner in which a game was likely to proceed, if played for a sufficiently long time. Today, the subject has evolved into an interesting branch of mathematics which enables us to arrive at good decisions even though the outcomes appear to be haphazard to begin with. It introduces some order into the randomness which we observe all around us.

Probability is a word which is used constantly in mathematics, economics, social sciences, biological sciences and in a multitude of situations in everyday life. It is not surprising that the word has acquired various shades of meaning which are not clearly distinguished from one another. We talk of the probability of a chance meeting with an old friend at a railway station, of the money market reaching a crisis, of a certain politician being reelected and so on. Subjectively speaking, probability represents a "degree of belief". However, in the realm of science and technology it should have a clear and definite meaning.

This book is not a textbook, but attempts to explain, in as simple a manner as possible, the principles and some interesting aspects of probability. This brief introduction, it is hoped, will enthuse many youngsters to study the subject at a higher academic level.
Acknowledgement It was at one of those "chance" meetings with Dr G. P. Phondke that he first suggested that I write a book on the "chancy" aspects of life. In other words, he wanted me to write a popular science book on probability and random phenomena. Since I had always been interested in this aspect of mathematics, it did not require further persuasion on the part of Dr Phondke for me to get started. It was exciting to brush up my knowledge on the subject and to prepare the manuscript. I am grateful to Dr Phondke for his suggestion in the first instance. After the manuscript was ready it was passed on to Shri S.K. Nag and Ms Parvinder Chawla to edit it. They too were very keen on the subject and they put in many hours of hard work to transform my original material into something understandable by a 17 year old student. I would like to wholeheartedly thank them and Shri Pradeep Banerjee (art section of PID) for their unstinted cooperation and efforts. Finally, I must thank my entire family for their unstinted support and for goading me along to complete the assignment as quickly as possible.
Dedicated to my mother Smt. Girijadevi Pavate
Contents

Heads or Tails? ... 1
Measures of Uncertainty ... 11
Models for Uncertainty ... 24
With Strings Attached ... 35
Distribution of Chance ... 44
Markov's Frogs ... 63
Barking up the Tree ... 71
The Uncertain Engineer ... 77
Chance and Management ... 84
Uncertainties in Physics ... 94
Bio Probability ... 104
Chancy Games ... 111
Glossary ... 121
Heads or Tails?

On a wet rainy day most children remain indoors and love to play games such as snakes and ladders, ludo and monopoly. These are simple games where coloured coins are moved around on a board depending on the outcome of rolling a die (commonly known as 'dice', which is actually the plural of die). It is a cube with six sides. Each side has dots to represent the numbers 1 to 6. If the number 4 appears on the uppermost side, then the player moves his coin forward by four places.
In the game of snakes and ladders, the player may find his coin moving up ladders or sliding down snakes depending only on the outcome of tossing the die. Since the snakes as well as the ladders happen to be of different lengths and are located at different parts of the board, it is usually difficult to anticipate which player will be the first to reach the goal and so win the game. One from amongst the participating players is going to win, sooner or later. We often say that the winner was lucky or fortunate. Some even speak of the mysterious "Dame Luck" as being on his side!

There are other games which begin by tossing a coin. The captain of a hockey or a football team who calls out "heads" or "tails" correctly has a choice. He can decide which side of the field his team would play from. At half-time the two teams change sides, so that any advantage or disadvantage evens out. In the game of cricket, the captain who wins the toss decides whether to bat first or to field. These, unlike ludo and monopoly, are not games of chance, because in these games the individual playing skills of the team members also matter. Nevertheless, the eventual outcome of the game remains uncertain.

Next consider a situation where you have to cross a road in order to reach your school. You look first to your left and then to your right before crossing the road to avoid an accident. In spite of these precautions there could still be a scooter which comes along faster than you had anticipated and gives you a fright. Such precautions make sure that the chances of crossing the road safely are high. However, in spite of taking all precautions, we take chances almost every day to get along in life! "I am going to take a chance and ask my bank manager for a loan." We are faced with many such situations where the outcome is uncertain and yet we have to decide on the next step to be taken.

Uncertainty is always present in some form or the other in most situations. What will the weather be like? Do I have to take an umbrella along with me? How long will it take for me to reach the station? Will I be able to catch the 10 o'clock train? As a young student you have to decide on a career very soon. Your decision may hopefully prove to be successful. However, you will know the result only after you have lived through your life! It is not the same as tossing a coin.

There are similar situations in business, where those involved in buying and selling shares would like to know in advance whether today is the right day to invest in particular shares or whether the prices will be better next week. In all these examples, the outcome is a "chance phenomenon". The word chance is a collective one, as it embraces all the unknown causes. One can rarely predict with certainty what the outcome or the result will be. Yet, we are all accustomed to the effects of chance in our lives and often refer to having good or bad luck, or to fate — the fortune teller's way of dealing with probability!
In the course of our everyday conversation we often make use of expressions such as "He is likely to win the elections", "I will probably be included in the school team", "It is almost certain my mother-in-law will stay with us". The words in italics are not intended to have precise meanings. Nevertheless, they are acceptable in our discussions because in the overall context accuracy is not important. When the weatherman says "It is going to be a good monsoon this year", he does not mean it will rain 20 cm in Delhi in the month of July. How nice it would be if he could quantify the adjective "good". But no meteorologist could ever do that!

We would also like to have a proper index which tells us what our chances of crossing the road safely are, or for that matter of winning a game of snakes and ladders. This desire to assign some sort of an index to our chances is not of recent origin. In fact, it has been bothering scientists and mathematicians for many centuries. They have given it a name, probability, which is a measure of the certainty of an outcome. By this we mean that a chancy phenomenon has many possible outcomes, and probability tells us what the chances of a particular outcome are. One dictionary meaning of probability is "the extent to which an event is likely to occur".

Now, how would we expect our measure of chance to behave? If we wanted to cross the road early in the morning at 6 AM, then this index would indicate that our chances of doing so without getting into trouble are quite high indeed. On the other hand, at about 9 AM the traffic on the road would have increased to an alarming level, and so also our chances of running into trouble. Therefore, the probability of crossing the road safely at 6 AM is much higher than at 9 AM. Here the index, or the probability, keeps changing with time. We would like to replace all these verbal descriptions by numbers; if this were done, then it would be possible to apply a wide range of mathematical methods to solve problems related to chance.
[Figure: Probability of crossing a road safely depends on the traffic — traffic volume plotted against time of day, from 4 AM to 12 noon]
Mathematicians have found ways of assigning numbers to probabilities. They have done so on a proper scientific basis after studying various kinds of chancy phenomena, including games with dice and cards. The lowest value assignable to probability should represent a situation where the chances of a success are practically nil. The highest value should indicate an absolute success. As such, scientists have found it convenient for the probability scale to vary from 0 to 1.

[Figure: The probability scale — from absolute impossibility (0) to absolute certainty (1)]

In business fields, investing your money in someone else's project is like taking a chance. Whether the project will be successful or not is at this point of time uncertain. You will be taking a risk in investing your hard earned money in this particular project. Similarly, insurance companies take a risk in insuring you against accidents or premature death. The insurance companies are not in this business for charity. They are reasonably sure that not all their policy holders die before all the
premiums have been paid. While one cannot predict or forecast the death of any particular individual, when dealing with large numbers the percentage of policy holders who will die is known from records of deaths. Insurance premiums are always calculated using probability theory.
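How such reasoning turns into a number can be sketched in a few lines of Python. This is only an illustration of the expected-value idea, not the book's method; the mortality figure and the loading factor below are invented:

    # Sketch of a fair premium for a one-year policy (illustrative numbers only).
    def premium(p_death, payout, loading=0.2):
        expected_claim = p_death * payout        # average payout per policy holder
        return expected_claim * (1 + loading)    # margin for expenses and profit

    # Suppose mortality records show 4 deaths per 1000 in the insured age group.
    print(premium(p_death=4 / 1000, payout=500_000))   # 2400.0

The insurer breaks even on the expected claim and lives on the loading; the policy holder pays a small certain amount to be protected against a large uncertain one.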
Probability has become a very useful subject of study for scientists, medical researchers, sociologists, business and financial experts, administrators and quality control engineers. Potential managers of large concerns study probability and are usually quite proficient in applying its principles to practical business problems.

In science we come across phenomena which follow certain patterns without fail. A stone which drops from a cliff follows Newton's laws of motion. The time it takes to reach the bottom can be accurately anticipated. This is an example of a deterministic phenomenon. On the other hand, the number of alpha particles emitted by a radioactive material shows variations. The actual number of emissions per unit time
keeps on changing, and this variation can be explained only on the basis of probability laws.

Many of the outcomes in the fields of economics and psychology are variable. Studies on subjects such as earnings by men, women and children, occurrences of strikes in industries, individual suicides, traffic accidents, etc., all suggest that there are very few deterministic laws to be found in the social sciences.

Uncertainty is built into the very nature of decision making. Decisions relate to choices between alternatives, and invariably there is uncertainty regarding which one would be a better choice. Since we are unsure, we ask ourselves "how probable or how likely" is it that a particular event could occur. Weighing and assigning values to uncertainties become important when it comes to taking good decisions.
These decisions could be at various levels: personal, national and even international. Probabilities of events or outcomes should be known if one is planning for success. The theory of probability provides a scientific method for evaluating what the chances of success would be.

The subject has become quite complex and makes use of much advanced mathematics. This unfortunately frightens many students away from the subject. Nevertheless, the importance of probability cannot be overemphasized. Measurement of the risk involved in taking a new medicine (there could be unexpected side effects), or in smoking cigarettes (since it harms the lungs and shortens one's lifespan), or while travelling by an overnight bus (chances of accidents are greater at night) is all done on the basis of probability. Even the allocation of prizes in casinos is determined by considerations of probability. In its simplest terms, probability provides a quantitative measure of the degree of certainty about occurrences and situations.
Measures of Uncertainty

In our everyday life we come across instances of events which have more than one outcome. These we call "random phenomena". The outcome of a random phenomenon is given a special name — "random variable". Random variables come in two main categories: discrete and continuous. The former category consists of random variables that assume distinct values. An example is the outcome of tossing a die or picking a card from a pack. An example of a continuous random variable is the time we have to wait in a queue in order to be served. We have already looked at a few examples which have discrete outcomes and seen that with each outcome one can associate a numerical index. This is known as the probability of that particular outcome.
Both the definition as well as the measure of probability are closely related to one another. Measuring or assigning values to probability is quite different from the way we count or measure things in everyday life. Probability is a concept, a sort of a mental symbol, which we have created to describe phenomena with multiple outcomes, that is random phenomena. It is not in
the same category as measuring the height of a person or determining his or her weight. The length and weight of an object are usually measured with the help of instruments. On the other hand, probability has to be estimated with the help of arithmetical manipulations.

The most elementary example of measurement we can think of is, say, counting the number of articles in a basket. We say there are 25 balls in the basket after counting them one by one. We can further distinguish between white and coloured balls. The two groups can be counted separately, and together they should add up to 25. Another example of measurement is weighing articles with the help of a traditional balance. This method, where an object is weighed by comparing it with a set of known reference weights, is an example of "direct measurement". The reference weights are usually certified by some national authority.
Measuring lengths with the assistance of a ruler or a scale is another example of direct measurement. The physical education department of a school records the heights and weights of students on a regular basis each year. Their measured heights will vary as there are short as well as tall students in every class. The teacher adds up all the individual heights and divides the sum by the number of students. The result gives the average or "mean" height of the class. Another calculation is to determine the "standard deviation" or SD for these measurements. It is a good index of how widely the individual heights deviate from the mean. Teachers also find the average and the SD of the performance marks obtained by students in their examinations.
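The teacher's two calculations are precise enough to write down in a few lines of Python (a sketch with invented heights, since the book quotes no figures):

    import math

    # Heights (in cm) of a small class -- invented data for illustration.
    heights = [152, 148, 160, 155, 149, 158, 151, 157]

    mean = sum(heights) / len(heights)
    # SD: the square root of the mean squared deviation from the mean height.
    sd = math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))
    print(round(mean, 1), round(sd, 1))   # 153.8 and about 4.1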
The process of measuring length is so convenient that many other measurements are commonly converted into lengths, so that they can be measured by reference to a scale. A mercury thermometer for measuring temperature is one such instrument. An increase in temperature causes mercury to expand inside a narrow capillary tube. The position of the tip of the mercury column can then be read off against a calibrated scale. Such a thermometer is an example of an "indirect" measurement of a physical quantity.

[Figure: Pierre de Laplace provided logic and reasoning to help understand probability]

It was Pierre Simon Marquis de LAPLACE (1749-1827), a French mathematician, who helped us to understand probability on the basis of proper reasoning or logic. He also showed a way of obtaining a measure of probability. First we need to be clear about a few terms which are in common everyday use but which have specific meanings when we talk about random phenomena.

• A "Trial" is an action which results in one of several possible outcomes. For example, drawing a card from a pack is a trial.
• An "Experiment" consists of a number of trials. For example, an experiment is when we draw 13 cards from a pack.
• An "Event" is a set of outcomes which have some characteristics in common. An event happens when we have drawn 13 cards at random from a pack and they are all black cards, or all hearts, or have numbers less than six, and so on.
Broadly speaking, Laplace suggested that the following steps should be taken before a probability can be assigned to an outcome:

• List all the possible outcomes of a trial.
• List all possible events which may occur as a consequence of conducting an experiment.
• Make an assessment of the likelihood of these events.
Suppose that a random experiment is to be performed. Let n represent the number of possible outcomes of such an experiment. We make an assessment and decide that these outcomes are all equally likely. Laplace suggested that the probability of any one of these outcomes would be 1/n. Now let r of these outcomes have some common characteristic. An event E is defined as an outcome which has this particular characteristic. Common sense tells us that the probability of an outcome having this characteristic is P(E) = r/n.

For example, if the experiment consists of tossing a six-sided die once, then the list of possible outcomes is 1, 2, 3, 4, 5 and 6. An event could be "the outcome is an even number" or "the outcome is a number less than four". When we toss a die, the probability of obtaining the number 4 is, according to Laplace, equal to 1/6. The probability of the event corresponding to obtaining an odd number 1, 3 or 5 is equal to 3/6 or 1/2. This is known as the a priori definition of probability.

The logic behind this definition is something like this: a die has six sides. When it is tossed it comes down to rest with any one of the six numbers facing upwards. So, all the six sides (or outcomes) are equiprobable and each has a probability of 1/6. It must be remembered that probability is the ratio of two numbers. In this example, the two numbers are 1 and 6. Probability is a dimensionless quantity, which means there are no units such as metres or kilograms attached to it. The phrase a priori is a Latin expression and it means "without investigating experimentally". In short, it is based entirely on our common sense examination of the problem, though of course Laplace gets the credit for expressing it so neatly with the help of mathematics.

Next, we consider tossing a coin. When it lands it has either the head or the tail facing upwards. Since there are only these two possibilities for the eventual state of the coin, we say that it has one chance in two of a trial resulting in a head. The a priori probability of obtaining a head in a single toss is one possibility out of two. Or the probability of a head is P(Head) = 1/2 = 0.5. Similarly, P(Tail) is also 0.5.

However, experiments should be performed to find out the validity of the a priori probability. A coin, for example, could be tossed a 100 times. The result might be that heads turn up 51 times and tails 49 times. The "relative frequency" of heads is 0.51 and that of tails 0.49. One might say that a 100 times is not good enough. So, why not 2000 times? The results might this time indicate relative frequencies of 0.495 for heads and 0.505 for tails. These are not quite the same as in the earlier experiment of a 100 trials, yet both of them are close to the a priori probability of 0.5.
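A computer spares us the tedium of tossing a real coin thousands of times. A minimal simulation (my illustration, not from the book):

    import random

    # Relative frequency of heads for increasing numbers of tosses of a fair coin.
    for n in (100, 2000, 100_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(n, heads / n)

The printed frequencies wander about, but they settle ever closer to the a priori value 0.5 as n grows.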
This relative frequency approach to probability appears to be more natural. We have an experiment which has k possible outcomes. These may or may not be equally likely. If the experiment is repeated n times, then it is possible that outcome 1 occurs r1 times, outcome 2 occurs r2 times, and the last outcome k occurs rk times. The sum of all these counts should add up to n, the total number of trials:

n = r1 + r2 + r3 + ... + rk

The ratios r1/n, r2/n, ..., rk/n are the relative frequencies of these k outcomes. If we continue these trials a large number of times, then the relative frequencies stabilize, and these stable values are known as the relative frequency measure of probability.

Posters, notices and charts are pinned to wooden boards by metal tacks. They look like shallow umbrellas.
[Figure: Tossing a tack — it can land on its back (probability 2/3) or on its side (probability 1/3)]
The pivotal point of a tack is sharp enough to go through the paper and pierce the wood behind. The flat portion can be pressed with a thumb. Now, if you toss a tack so that it falls on a flat table, it can do so with either the pin facing up in the air or on its side with the pin tip touching the table. If you were to toss this tack a large number of times you will find that it lands on its back with a probability of 2/3 and on its side with a probability of 1/3.
[Figure: Typical pattern of the result of throwing a die — the relative frequency of occurrence of six plotted against the number of throws]

Someone performed a similar experiment on a six-sided die. As the number of throws went on increasing, the relative frequencies hovered around the value 1/6. Intuitively, one would expect that the larger the number of throws, the closer the relative frequency approaches the a priori probability of the outcome. One would like to perform this experiment a million times, but who has the time or patience! The point is one has to stop repeating the experiment at some stage and base one's conclusions on the results obtained till then, and therein lies the error! The relative frequency definition of probability is only useful when we have sufficiently large experimental data.

Most factories consist of production lines. It is the duty of the quality control engineer to examine the products and to make sure that the specifications are met. In a certain factory, the engineer tests 1,000 products and finds that 40 of these are defective. So, the probability of a product selected at
random turning out to be defective is 40/1000 or 0.04. This probability is based on the relative frequency of the occurrence of defective units. It has been arrived at after a large number of trials. Sometimes this probability is expressed as a percentage; in this example, it is 4%. Such periodic checks are mandatory in most industries. A good production engineer will improve his manufacturing process, and if testing is done on a regular basis the number of defective products will decrease. The ultimate aim of all factories is to have a "zero defect" production line.

The only objection to the relative frequency estimate of probability is that it assumes that the random experiment is always repeatable under the same conditions. This may or may not always be the case. There are many situations in which the measurement of certainty of an outcome is required, but where the experiment is of a "once for all" type. A bank manager would like to know whether he should lend money to a particular industrialist. He is worried about the possible outcome of such an investment. Will it result in a profit, a break-even situation or a loss for the bank? The manager cannot repeat the experiment of lending money to the same industrialist a large number of times in order to estimate the relative frequency of success! Similarly, an insurance company cannot have an individual person die repeatedly in order to assess a value for the premium on a life policy!

However, bank managers are experienced persons and have seen a large number of industrialists of all kinds in their careers. They have an accumulated knowledge about industries, people, markets for various products, the state of the economy, competition and so on. Based on this background experience the manager produces a subjective measure of probability. His assessment might be that the applicant sitting in front of him and requesting a loan has only a 60% chance of success.
Likewise, the insurance company has the medical report of an individual, data on mortality in different age groups and also information about whether the individual is employed in a dangerous profession like driving trucks at night. It is possible to pool the information available and produce a subjective measure of probability. This is known as "subjective probability" and was first discussed by the French mathematician de MOIVRE. This definition is based on the degree of belief of a knowledgeable person.

As children we also had our own beliefs, just as the bank manager does. One often says to friends, "I bet you two toffees that
the Bombay cricket team will win the Ranji Trophy match". Others might say: "I bet you three toffees that Baroda will win". A time honoured way of measuring an individual or subjective probability is to "lay a bet".

We have considered three alternative ways of defining probability and in the process explained how measures of probability are arrived at. The first way was to examine the situation or the problem theoretically and then to estimate its probability of success. One usually applies the principle of equal likelihood to the outcomes of such random experiments. A word of caution: not all outcomes are equally likely to happen.
The second definition was based on repeating the random experiment a large number of times and then estimating the relative frequencies of each outcome. Here the relative frequencies need not be the same for all outcomes. As the number of experiments becomes larger and larger, the relative frequency stabilizes, and this is the basis for the second definition of probability.

The third definition was based on asking an expert for his considered opinion. This is very much a subjective approach. The probability he indicates is a degree of his belief in the success of a random experiment. It is mainly used to describe social problems.

To sum up, when we say P(A) = p, it is to be read as "the probability of an event A is given by p". The index p is a positive number having a value between 0 and 1. On the other hand, we may say "A is probable" or "A is likely". These are expressions used only in common conversation. The concept of assigning probabilities to random variables no doubt took a long time to evolve. However, it is a useful and convenient idea while dealing with the numerous random phenomena we come across in life.
Models for Uncertainty

It is a common practice in physics, chemistry, engineering and even in economics to use mathematical formulations or models to describe natural phenomena. Interestingly, mathematical techniques developed to describe one family of problems can be used, with appropriate modifications, to explain other phenomena. So also in probability, one can explain various situations in terms of known mathematical expressions. Concepts in probability become easy to understand and to expand with the help of mathematical models.

When a die is tossed there are six (and only six) possible outcomes. Any number between 1 and 6 has the same chance of appearing uppermost when the die comes to rest. These outcomes can be either listed or represented pictorially in the form of a diagram. This is commonly known as the "sample space" for that particular experiment. Each of the outcomes which forms a part of the sample space is referred to as a "sample point". In the case of tossing a die, the sample space consists of six sample points and these are the numbers 1, 2, 3, 4, 5 and 6. The sample space can be formally expressed as:

S = {1, 2, 3, 4, 5, 6}
When it comes to tossing a coin the sample space is: S = {H, T}, where H denotes a head and T a tail. This manner of representing sample points and sample spaces would perhaps ring a bell in your mind. It reminds you of the set theory and Venn diagrams you learnt at school. A set, we were told, is a collection of objects. These objects are said to be members of the set. In some instances, the members are clearly and explicitly listed.
If R is the set of very important personalities from the Ramayana, then it could be written out by listing its members within brackets, such as R = {Rama, Sita, Lakshmana, Hanuman, ...}.
As another example, if α is the set of the letters of the English alphabet, it would contain the letters from A to Z. In both the above examples we have clearly enunciated all the members within brackets. There can be no ambiguity. Another way of specifying a set is by mentioning a specific property which its members should possess. The same set could just as well have been expressed as α = {x : x is a letter of the English alphabet}. Here the colon (:) after the first x is to be read as "such that". After the colon, the properties which x should possess are given. Set β, containing the five vowels and no other letters, is said to be a sub-set of the set α. The complement of β is the set of consonants, containing all the letters minus the vowels.

The correspondence between the language of sets and the terms used for random experiments can be summed up as follows:

General terminology          Terms associated with a random experiment
Sets                         Sample space
Elements or members          Elementary events, outcomes, sample points
Sub-sets                     Events
[Figure: A simple Venn diagram]
A considerable amount of simplification is brought about when we use Venn diagrams to describe sets and sub-sets. These provide pictorial representations of sets, their sub-sets and combinations of one sub-set with another. Let us now correlate our knowledge of sets and Venn diagrams with the outcomes of random experiments and their
probabilities. S is the sample space and, as you may have guessed, it closely resembles a set. An outcome of a random experiment is a sample point, and this corresponds to an element of a set. An event is a sub-set of the sample space. This analogy between a sample space and a set is extremely convenient in manipulating the probabilities of complex problems.

Venn diagrams are particularly useful representations of sample points in probability theory. For instance, the outcomes of tossing a die can be mapped on to a sample space. If a successful outcome of a random experiment is obtaining an odd number, then the sample points corresponding to odd numbers could be encircled by another (dotted) curve. The outcome of rolling a die resulting in an odd number coming up is called an "event". As one can easily see, this particular event contains three sample points. Since these are equiprobable, the probability of the event is the sum of the probabilities of the sample points which together constitute the event. Hence, in this case, the probability is 0.5.

[Figure: An event is a sub-set of the sample space]

We next consider a slightly more difficult example. The sociology department of a university conducted a survey within the university campus. Married couples were interviewed. Both the husband as well as the wife were asked whether or not they went out to work. Giving the man's answer first and denoting working as W and not working as U (for unemployed), there are only four possible
outcomes of the experiment, namely, (W,W), (W,U), (U,W) and (U,U).

Establishing a proper sample space is an important step in solving problems associated with random variables.
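Because a sample space behaves like a set, it can be modelled directly with a set type. A small sketch (my own illustration) of the die example:

    from fractions import Fraction

    S = {1, 2, 3, 4, 5, 6}                  # sample space of one toss of a die
    odd = {x for x in S if x % 2 == 1}      # event: the outcome is an odd number

    # For an equiprobable space, P(event) = points in the event / points in S.
    print(Fraction(len(odd), len(S)))       # 1/2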
How important this is can be illustrated quite simply. Suppose a man purchases a lottery ticket. He divides the sample space into two events, viz. he wins and he does not win. He argues (as many do) that there are only two alternatives — he wins or loses, each with a probability of 0.5. The fallacy lies in the assumption that these two elementary events are equally likely. A sound argument would be based on breaking down the sample space in greater detail, so that each elementary event represents a particular ticket being the winning one. If a million lottery tickets have been sold, then our friend who has purchased only one ticket has a probability of one part in a million of winning!
Continuing on the theme of representing events in a sample space, we are often interested in examining the probability relationships between several events. Take the example of a small group of students, four boys and three girls. An oral test has been scheduled by the teachers. It just so happens that two of the boys have taken it easy and are not prepared for this test, and so also is the case with one of the girls. All the others are well prepared and are looking forward to the test. The teacher asks a student at random to lead the discussion. He has a choice of one from amongst seven. The sample space for inviting a student to talk first is:

S = {Mw, Mw, Mb, Mb, Fw, Fw, Fb}

Event W stands for the student selected being well prepared: W = {Mw, Mw, Fw, Fw}
Event B stands for the student selected being badly prepared: B = {Mb, Mb, Fb}
Event M stands for the student selected being male: M = {Mw, Mw, Mb, Mb}
Event F stands for the student selected being female: F = {Fw, Fw, Fb}

The notation M stands for a male student and F for a female student; the small letter w means well prepared and b badly prepared. There are a number of events which might interest the teacher. These events can be identified on the sample space. There are seven sample points and each one has the same probability of being selected. The probability of any particular student being selected at random is 1/7. The probabilities of the four events W, B, M and F are 4/7, 3/7, 4/7 and 3/7 respectively.
[Figure: The addition law of probability]
The sum of these four probabilities does not add up to 1, for reasons which will become apparent soon.

At this stage we introduce yet another concept which has been borrowed from conventional set theory. We are interested in the probabilities of two events M and W, both of which have been defined on the same sample space. We may want to know the probability of both events occurring together, and this is expressed as P(M and W); or the probability of either one of the two or both events occurring, which is expressed as P(M or W or both). Both these ideas are important and are commonly used to solve problems in probability.

The "union" of two events M and W is the collection of sample points which lie either in the sample space of M or in that of W. This is often expressed as (M∪W) or as (M+W). Both expressions are in common usage, but we shall prefer to
use the latter. The sample space of (M+W) contains all those students who are male (whether well prepared or not) or well prepared (whether male or female).

There are two Mws which happen to be common to both events. This common region of the sample space is the "intersect" of the two events. The concept of an intersect is represented as (M.W) or (M×W) or (MW). We shall prefer the last representation as it is convenient to do so. In the example above, the intersect (MW) consists of those sample points which are both in M as well as in W. So, this intersect is a sub-set comprising the two Mws. Any event which is made up of several other events is called a compound event. Both (M+W) and (MW) are compound events.

The probability of the intersect can be calculated by adding up the number of sample points in the intersect and dividing it by the total number of sample points in the sample space. Since it is an equiprobable sample space to begin with, P(MW) is 2/7. When it comes to estimating P(M+W) we have to be a bit careful. If we were to add the number of sample points in event M and event W, we would have included the two sample points in the intersect twice over. In order to avoid this double counting we subtract the probability of the two points in the intersect. This leads us to the well known "addition" rule for two events M and W. The probability of the union of these two events is

P(M+W) = P(M) + P(W) − P(MW)

Inserting the numeric values gives P(M+W) = 4/7 + 4/7 − 2/7 = 6/7.
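The addition rule can be verified mechanically. A sketch using the seven-student sample space (the suffixes 1 and 2 are mine, added only to keep identical labels distinct):

    from fractions import Fraction

    S = {"Mw1", "Mw2", "Mb1", "Mb2", "Fw1", "Fw2", "Fb"}
    M = {s for s in S if s[0] == "M"}       # male students
    W = {s for s in S if "w" in s}          # well prepared students

    def P(event):                           # equiprobable sample space
        return Fraction(len(event), len(S))

    # Addition rule: P(M+W) = P(M) + P(W) - P(MW).
    assert P(M | W) == P(M) + P(W) - P(M & W)
    print(P(M | W))                         # 6/7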
[Figure: The addition law modified — events W (well prepared students) and B (badly prepared students) share no sample points]

There is one special case of this addition rule which is rather important in probability theory. In the example above, the teacher is now interested in the events W and B. These correspond to students (male or female) who are well prepared and those who are badly prepared. The two events do not intersect at all. There are no sample points common to the two
events W and B. This is obvious, as one is either well prepared for the test or not. When such a situation arises, the two events W and B are said to be "mutually exclusive". In such circumstances, the addition rule conveniently simplifies to P(W+B) = P(W) + P(B).

Let us take one more example. There are five directors on the management board of an industrial concern. They have to elect from amongst themselves two vice-presidents. For convenience let us call the directors A, B, C, D and E. The possible outcomes of the election are shown in the sample space. There are only 10 possible outcomes, since the same candidate cannot be elected twice over. Though the result is not known in advance, it has to be one of the 10 combinations listed in the sample space. Since the outcomes are equiprobable, the probability of any one combination turning up as the winner is 1/10 or 0.1. We are particularly interested in event G, where candidate E is among the winning team, and also in event H, where
candidate A is among the winners, as shown in the sample space. It follows that P(G) = 4/10 = 0.4 and P(H) = 4/10 = 0.4. Since only the pair AE is common to both events, P(GH) = 1/10 = 0.1, and the probability that either or both candidates win therefore works out to be P(G+H) = 0.4 + 0.4 − 0.1 = 0.7.

Quite often the number of sample points becomes large, and the above procedure of actually representing sample points becomes time consuming and tedious. In these circumstances, one can make the procedure for calculating the probabilities of events easier by using some basic mathematical rules governing the probabilities of events. But more of that later.
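For the directors' election, a quick enumeration (an illustrative sketch using the standard library) confirms the figures:

    from fractions import Fraction
    from itertools import combinations

    pairs = list(combinations("ABCDE", 2))  # the 10 possible winning pairs
    G = [p for p in pairs if "E" in p]      # event G: E is among the winners
    H = [p for p in pairs if "A" in p]      # event H: A is among the winners
    GH = [p for p in pairs if "A" in p and "E" in p]

    print(Fraction(len(G), len(pairs)), Fraction(len(H), len(pairs)))  # 2/5, 2/5
    # Either or both: P(G) + P(H) - P(GH) = 0.4 + 0.4 - 0.1 = 0.7
    print(Fraction(len(G) + len(H) - len(GH), len(pairs)))             # 7/10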
With Strings Attached

In life, success often depends on the occurrence of some other supporting events. For instance, if you are in the business of selling light bulbs, you will soon find that the number of bulbs sold (and hence the profit made) depends on how well the bulbs have been manufactured and on how reliable they are. If the supervision in the factory has not been strict, then there are bound to be a large number of defective bulbs manufactured. The customer will no doubt be upset at having purchased such a poor brand of bulbs. He will shift his preference to some other brand. So, we see that negligence in supervision has a direct bearing on the profits of the manufacturing organization.

The factory manager decides to test some of the bulbs as they come off the production line. His engineers find that from a lot of 100 bulbs tested there are five which are defective. So, on the basis of frequency analysis, the probability of a bulb selected at random being defective is P(D) = 0.05. The engineer decides to test yet another bulb from the same lot. What would be the chance that it too is a defective one? A mathematical model for the statement and a solution to problems
of this kind is provided by the notion of "conditional probability". Given two events A and B in a sample space S, the conditional probability of event B, given that event A has taken place, is denoted by P(B/A). This tells us the probability of B, knowing very well that A has already occurred. In a manner of speaking, all probabilities are conditional in the sense that they are influenced by the information which was available when they were calculated. However, the term "conditional" is used when a revised set of probabilities is calculated in the light of some information which is additional to all that was known earlier.

Suppose one observes a large number N of occurrences of a random phenomenon in which events A and B are defined. Let NA and NB denote the numbers of occurrences of events A and B. Also, NAB is the number of occurrences common to the two events. A Venn diagram is convenient to depict such a situation. We see that the outcomes which constitute event B are made up of two parts. First, there are those outcomes which occurred when A did not occur. Second, there are those which happened when A did occur (NAB). Thus, only the occurrences of A which are simultaneous with those of B are taken into account. Assuming that the sample space is equiprobable, we have:

P(B/A) = (number of ways both A and B can occur) / (number of ways A can occur)

In terms of the frequency definition of probability, this becomes

P(B/A) = NAB/NA = (NAB/N)/(NA/N) = P(AB)/P(A)
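The frequency form of the formula is easy to verify by simulation. A sketch (my own, with two arbitrary events defined on two throws of a die):

    import random

    N = 100_000
    NA = NAB = 0
    for _ in range(N):
        first, second = random.randint(1, 6), random.randint(1, 6)
        A = (first % 2 == 0)    # event A: the first throw is even
        B = (second >= 5)       # event B: the second throw is 5 or 6
        NA += A
        NAB += (A and B)

    # P(B/A) estimated both ways; for these events the true value is 1/3.
    print(NAB / NA, (NAB / N) / (NA / N))

Both printed numbers are identical, as the algebra above says they must be.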
It is of course assumed that P(A) is not zero, or else we are landed with the absurd result of P(B/A) becoming infinite.

In our next random experiment we have six identical cards, with one of the letters A, B, C, D, E and F printed on one side. These cards are placed on a table with their blank sides facing upwards. The experiment consists of picking up four cards at random and noting the outcome. There are 15 different combinations for picking up the four cards. Each individual card, A to F, is present in 10 of these combinations. The number of combinations in which both A and B are present is six. The probability of finding the card B, given that A is present, therefore happens to be 6/10.
[Figure: Sample space of the card experiment — an A and a B are present together in six ways]

It is interesting to realise that when P(B/A) is being computed, we are essentially computing P(B) with respect to the reduced sample space A, rather than with respect to the original sample space S. When calculating the probability of event B, we are really asking ourselves how probable it is that
we shall be in event B knowing that we must also be in S. On the other hand, when we compute P(B/A) we ask ourselves how probable it is that we are in B knowing that we must also be in A. The sample space shrinks from S to A.

Thus, there are two ways of computing the conditional probability P(B/A). The first method is by considering the probability of B with respect to the reduced sample space A. The other method is to first calculate or estimate P(AB) and P(A) with respect to the original sample space S and subsequently take their ratio. This concept of conditional probability can be used to evaluate the probability of the simultaneous occurrence of two events:

P(AB) = P(B/A)P(A)

We can extend this idea to compute the probability of a single event A. Suppose the sample space is divided into a number of non-overlapping events B1, B2, ..., Bs. A is the event which intercepts some of these partitions. So, A is the sum of AB1, AB2, ..., ABs. Some of these intercepts may be empty. So, the probability of occurrence of event A is the sum of the individual probabilities P(AB1), P(AB2), and so on till P(ABs). This gives us the total probability of the event A in terms of its intercepts with the portions B1, B2, ..., Bs. When two events intersect and P(AB) is their joint probability, the conditional probability is P(A/B) = P(AB)/P(B).

Let us go back to our friend, the six-sided die, which we now toss two times. The events A and B are defined as follows:

A = (outcome of the first toss is an even number)
B = (outcome of the second toss is 5 or 6)

Intuitively we guess that the two events are quite unrelated. Knowing that A did occur does not yield additional information about the occurrence of event B, and vice versa. The sample space which includes all possible outcomes
contains 36 sample points which are equiprobable. The event A has 18 sample points, whereas event B has 12. The intersect of these two events has 6 sample points. So, we are tempted to say (and quite correctly so) that the events A and B are independent of one another, provided P(A/B) = P(A) and P(B/A) = P(B). The probability of A happening remains the same, irrespective of whether B has already taken place or not. For independent events the joint probability is simply P(AB) = P(A)P(B); this is known as the "multiplication" law for independent events.

Now we are ready to reconsider the problem relating to the defective electric bulbs which we mentioned earlier. The probability of picking up a defective bulb is P(D1) = 0.05. Now if the bulb is replaced in the lot of 100 bulbs, so that the number remains the same at 100, and a second bulb is selected at random from this lot, the probability of once again selecting a defective bulb remains the same and P(D2) is 0.05. The two events are independent of one another. The probability that both bulbs are defective is the probability of picking up the first defective bulb multiplied by the probability of picking up the second defective bulb.

On the other hand, when the first bulb was seen to be defective it could have been kept aside and a second bulb selected at random from amongst the 99 bulbs left over. The conditions have changed. The events of drawing a defective bulb followed by drawing a second defective bulb are now no longer independent. The probability of the second event depends upon the result of the first draw. So P(D2/D1) = 4/99. The probability of picking two defective bulbs in a row (without replacement) is, therefore, 4/99 × 5/100 ≈ 0.002. The values of the two joint probabilities (for dependent and independent events) are seen to be different.
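A simulation brings out the difference between the two sampling schemes. A sketch (illustrative code; the lot of 100 bulbs with five defectives is the one from the text):

    import random

    lot = ["defective"] * 5 + ["good"] * 95

    def two_draws(with_replacement):
        bulbs = lot.copy()
        first = bulbs[random.randrange(len(bulbs))]
        if not with_replacement:
            bulbs.remove(first)             # set the first bulb aside
        second = bulbs[random.randrange(len(bulbs))]
        return first == second == "defective"

    for mode in (True, False):
        trials = 200_000
        hits = sum(two_draws(mode) for _ in range(trials))
        print(mode, hits / trials)  # near 0.0025 with replacement, 0.0020 without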
An important visitor comes into a classroom and asks a young student, "Which is the largest town in India?" The student looks perplexed for a while, so the VIP provides a clue: "The name begins with the letter M." A smile appears on the student's face as he replies "Mumbai", quite correctly so. The sample space (in the student's mind) could have been Delhi, Calcutta, Mumbai, Bhopal, Madras, Baroda, Ahmedabad, etc. The student believes one of these is the largest town but he is not sure. The moment a hint is given that the name begins with the letter M, his search narrows down. It is similar to what happens in conditional probability. The sample space has now become Mumbai, Madras, etc.

The English philosopher Rev Thomas BAYES (1702-61) was interested in the theory of probability. The theorem which bears his name is concerned with twisting conditional probability the other way round. Given a conditional probability of the form P(E/N), Bayes theorem helps us to determine the probability P(N/E):

P(N/E) = P(NE)/P(E). Therefore, P(NE) = P(N/E)P(E).
Also, P(E/N) = P(NE)/P(N). Therefore, P(NE) = P(E/N)P(N).
Equating the two: P(N/E)P(E) = P(E/N)P(N).
Therefore, Bayes theorem: P(N/E) = P(E/N)P(N)/P(E)
A well known TV manufacturer has three factories, B1, B2 and B3. The first factory, B1, manufactures 200 units each day, whereas the other two factories produce only 100 units each day. It has been determined by the quality control engineers that 2% of the sets manufactured by B1 and B2 are defective, whereas 4% of those manufactured in B3 are no good. When it comes to selling these TV sets, they are all pooled together and then distributed. Now a TV set is selected at random. The first question is: what is the probability that it is defective?
Let D be the event that a set is defective and Bi the event that it was manufactured in factory i. The individual probabilities of manufacturing defective TV sets by the above three factories can now be calculated. Thus, one can arrive at the probability of randomly selecting a defective set from the pooled lot. The next question is: given that this randomly selected set is defective, what is the probability that it was manufactured in factory B1? This can be easily calculated by applying the generalised Bayes theorem.

Similarly, in many life situations, Bayes theorem helps us to know the cause of an event. For example, from hospital records of deaths and their causative diseases, Bayes theorem can give us the relative probabilities of the listed diseases which could have caused a particular death. Such probabilities, calculated regionally over large samples, could help public health authorities to take preventive measures.
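Working the TV-factory numbers through (a sketch of the calculation, using the production and defect figures given in the text):

    # Daily production and defect rates of the three factories (from the text).
    production = {"B1": 200, "B2": 100, "B3": 100}
    p_defect = {"B1": 0.02, "B2": 0.02, "B3": 0.04}

    total = sum(production.values())
    p_factory = {b: n / total for b, n in production.items()}   # P(Bi)

    # Total probability: P(D) is the sum of P(D/Bi) x P(Bi) over the factories.
    p_D = sum(p_defect[b] * p_factory[b] for b in production)
    print(p_D)                                                  # 0.025

    # Bayes theorem: P(B1/D) = P(D/B1) x P(B1) / P(D).
    print(p_defect["B1"] * p_factory["B1"] / p_D)               # 0.4

So one set in forty from the pooled lot is defective, and a defective set has a 40% chance of having come from factory B1.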
Distribution of Chance

So far we have been dealing with rather simple random experiments. Their outcomes xi were said to be discrete. Each outcome xi was assigned a probability P(xi). The constraints imposed on these probabilities are: (1) that the P(xi)'s are positive numbers with magnitudes between 0 and 1, and (2) that the sum of the probabilities over all the outcomes adds up to 1. We have considered the notion of events as subsets in sample spaces. These events have probabilities depending on the number of sample points contained within them. Mathematical models and graphical descriptions for combinations of events were also described in the earlier sections.
As we continue to explore more about probability, we come across yet another new concept and that is "probability distribution". It is not really a new idea as much as a new way of presenting the information or linkages between the outcomes of random experiments together with their probabilities. A simple case is that of tossing a die. A way of expressing the outcomes and their probabilities is: Sample space X = {1,2,3,4,5,6} with the probability of each sample point being 1/6.
While conducting surveys to find out people's opinions, the questions are usually asked in such a manner that the answer is either "yes" or "no". In a factory, the quality control engineer gets answers to his trials in the form "defective" or "good". A salesman may solicit orders from a hundred potential clients; the results are that orders are received or they are not. In all these examples a common feature is that the outcome has only two possibilities. Unlike in the case of tossing a coin, where the probability of a head or a tail turning up is exactly 0.5, the probabilities of a "yes" or a "no" in surveys are not so evenly poised. In other words, the outcomes are not equiprobable. Such random phenomena, where there are only two possible outcomes, are known as Bernoulli trials. The probability of a success is p and that of a failure is q (p + q = 1).

A Bernoulli trial is best exemplified by a couple who decide to have children till a son is born. This sequence may terminate after 1, 2, 3, ... attempts, there being no limit to the number. Thus, the number of sample points is large. Assume (r−1) girls are born in a row, followed by a boy. The experiment stops at that moment, that is, after r trials; the probability of this particular sequence is q^(r−1) × p, where the birth of a boy counts as the success. Such a probability distribution is known as the geometric or Pascal distribution, named after Blaise PASCAL (1623-62).

There are occasions when the Bernoulli experiment is conducted n times. What we want to know is: how many successes are we likely to have when we conduct these n trials? There happens to be a mathematical formula which provides an answer to such a question. It is known as the "Binomial Distribution".
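The family example can be simulated directly. A sketch (my own illustration, assuming a boy and a girl are equally likely at each birth):

    import random
    from collections import Counter

    def attempts_until_son(p_boy=0.5):
        r = 1
        while random.random() >= p_boy:   # a girl is born; the couple tries again
            r += 1
        return r

    counts = Counter(attempts_until_son() for _ in range(100_000))
    for r in range(1, 6):
        # Geometric law: P(first boy on attempt r) = q**(r - 1) * p.
        print(r, counts[r] / 100_000, 0.5 ** (r - 1) * 0.5)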
probability of three successes and two failures can be calculated by multiplying p^3 by q^2. This works out to be 0.00214. The inspector would be making a grave error if he declared that the probability of two failures is 0.214% and that this is not significant. What he has overlooked is that these two failures and three successes could have happened in altogether 10 different ways, each with the same probability:
SSSFF  SSFSF  SFSSF  FSSSF  SSFFS
SFSFS  FSSFS  SFFSS  FSFSS  FFSSS
Hence the probability of the test resulting in three successes and two failures (the order in which the successes or failures take place is not important) is 10 × 0.00214, or 0.0214, or 2.14%. In other words, the analysis indicates that there is roughly a 2% chance of obtaining two defective products when a random sample of five products is tested. Despite the small probability of 0.05 for a unit proving to be defective, the probability of detecting two defective items in a sample of five is approximately 2%. The production manager has to improve the quality of his products. All retail agencies purchase the products in lots, and they approve an entire lot on the basis of testing samples.

At this stage we digress a little into the realm of permutations and combinations. Without realising it, we have already had a taste of combinations in the above example. Suppose we have six boys, named (for convenience) A, B, C, D, E and F. The question is: in how many different ways can we arrange these boys to stand in a line? The first boy can be any one of the six, so there are six ways of choosing him. Five boys are then left over, so there are five different ways of filling the second position. Hence, there are 6 × 5 or 30 different ways of arranging the first and second positions. Similarly, there are four choices left for the third position, three for the fourth, two for the fifth and just one for the sixth place. Altogether there are 6 × 5 × 4 × 3 × 2 × 1 = 720 different ways of arranging these six boys. Mathematicians have a special name for such a sequence of products: it is known as factorial 6, and the short way of writing it is 6! = 6 × 5 × 4 × 3 × 2 × 1. Similarly, n! = n × (n-1) × (n-2) × ... × 4 × 3 × 2 × 1.
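These numbers are easy to verify with a few lines of Python; a sketch (math.comb counts the arrangements for us):

    import math

    p, q = 0.95, 0.05                        # probability of a pass and of a failure
    one_order = p**3 * q**2                  # one particular order, e.g. SSSFF
    ways = math.comb(5, 3)                   # 10 orders of 3 successes and 2 failures
    print(round(one_order, 5))               # about 0.00214
    print(ways, round(ways * one_order, 4))  # 10 and about 0.0214, i.e. 2.14%

    print(math.factorial(6))                 # 720 ways of lining up the six boys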
To make subsequent arithmetic less confusing, mathematicians have decided to let 0! = 1 and x^0 = 1. Permutations refer to the number of ways in which a set of objects can be arranged "in order"; the word order is crucial. For example, a committee of five people (call them P, Q, R, S and T for convenience) has to elect two persons from amongst themselves to fill the posts of president and vice-president of an organisation. There are five ways of filling the president's position, as there are five members available for election. Once this post has been filled there are only
four possible candidates left to choose from for the vice-president's position. So, using our common sense, we notice there are 20 ways or permutations of selecting a pair of top officials from amongst the five committee members. These permutations are:

PQ  PR  PS  PT
QP  QR  QS  QT
RP  RQ  RS  RT
SP  SQ  SR  ST
TP  TQ  TR  TS
Of course, it is presumed that the same person cannot be elected to both positions, and hence PP, QQ, RR, SS and TT are ruled out from the above matrix. Mathematicians have a formula for everything! When it comes to permutations they use the symbol nPr to mean the number of permutations of n objects when r of them are taken at a time. In the above example, n = 5 and r = 2:

nPr = n!/(n-r)!;  5P2 = (5 × 4 × 3 × 2 × 1)/(3 × 2 × 1) = 5 × 4 = 20.
This is exactly the number we obtained by organizing the matrix above; only, we based our earlier estimate on common sense. In the matrix, the first letter stands for the president and the second for the vice-president. The two positions are different in hierarchy, so the order in which the two persons have been elected is important. Suppose instead we had to elect two vice-presidents. Now we are interested only in which two members are elected, and the order is of no consequence. For instance, announcing that PQ or QP have been elected makes no difference, since both have been elected. When two people are elected without regard to their arrangement, this "unordered" selection is called a "combination". The mathematician's way of expressing this combination or unordered selection is:
nCr = n!/(r!(n-r)!) = nPr/r!

In our example: 5C2 = 5!/(2! × 3!) = (5 × 4)/2 = 10.
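Python's standard library has both formulas built in; a small sketch confirming the committee example:

    import math

    print(math.perm(5, 2))   # 5P2 = 20 ordered pairs (president, vice-president)
    print(math.comb(5, 2))   # 5C2 = 10 unordered pairs (two vice-presidents)

    # The same numbers from the factorial formulas:
    n, r = 5, 2
    print(math.factorial(n) // math.factorial(n - r))                        # nPr
    print(math.factorial(n) // (math.factorial(r) * math.factorial(n - r))) # nCr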
When it comes to electing two vice-presidents, the matrix shrinks in size to:

PQ  PR  PS  PT
QR  QS  QT
RS  RT
ST
Combinations are quite important when it comes to the binomial probability distribution. We calculate the probabilities by listing all the ways in which a particular outcome can be arrived at. We conduct n independent trials, where the probability of success is p and that of failure is q. The question we ask is: what is the probability of obtaining r successes? The answer is provided by the expression:

P(r; n, p) = nCr p^r q^(n-r), where q = 1 - p.

The coefficients nCr have a special pattern and their values can be calculated, or one can look them up from Pascal's triangle. The first few lines of this triangle are:
n = 0:                1
n = 1:              1   1
n = 2:            1   2   1
n = 3:          1   3   3   1
n = 4:        1   4   6   4   1
n = 5:      1   5  10  10   5   1

Pascal's triangle
The binomial distribution, although relating to a simple situation of an experiment with only two outcomes, has wide applications. Frequently, in random experiments with several outcomes, our interest is in one particular outcome. That outcome is counted a success and all others are failures, and the binomial distribution then tells us how many successes to expect when the trial is conducted n times. For example, when we throw a die we could (arbitrarily) decide that a throw of five is a success and all other outcomes are failures. In this situation, p = 0.167 and q = 0.833. If the die is tossed 10 times (n = 10), then the binomial probability of r successes is given by 10Cr p^r q^(10-r).

Another example of a binomial distribution comes from a doctor's nursing home. The doctor knows from long experience that when a particular epidemic breaks out, the probability of a patient surviving is p = 0.9 and that of his passing away is q = 0.1. On a particular day there are exactly 20 patients in his nursing home (n = 20). It is difficult to say which patient will survive and which will not, but the actual number of survivals can be estimated using the binomial distribution.

Probability of all surviving: P(r=20) = 20C20 (0.9)^20 (0.1)^0 = 0.12
Probability of 19 surviving: P(r=19) = 20C19 (0.9)^19 (0.1)^1 = 0.27

Using these two probabilities we can estimate the probability of not more than one patient dying. This is the sum of P(r=20) and P(r=19), which is 0.12 + 0.27 = 0.39.
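The nursing-home figures can be reproduced directly; a sketch, with the survival probability 0.9 taken from the text:

    import math

    def binom_pmf(r, n, p):
        # Probability of exactly r successes in n Bernoulli trials.
        return math.comb(n, r) * p**r * (1 - p)**(n - r)

    n, p = 20, 0.9
    p20 = binom_pmf(20, n, p)   # all 20 survive
    p19 = binom_pmf(19, n, p)   # exactly 19 survive
    print(round(p20, 2), round(p19, 2), round(p20 + p19, 2))   # 0.12 0.27 0.39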
Milk in tea or tea in milk?

As a final example of the binomial distribution, let us consider the case of an elderly lady who claims to have a special expertise: after tasting a cup of tea she can tell whether the tea was added to the milk or the milk to the tea. An experiment is set up with 10 pairs of cups, each pair containing one cup made by each method. Let p be the probability that the lady correctly classifies a pair of cups. The probability that she correctly classifies r of the 10 pairs can be estimated using the binomial distribution. If we grant the lady extraordinary powers, so that she correctly classifies at least 8 of the 10 pairs, the probability of this happening can be calculated by summing the binomial probabilities for r = 8, 9 and 10. If the lady is only guessing, the probability of such a classification is very meagre; but if she has real talent, the probability can approach 1.
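If the lady is merely guessing, p = 0.5 for each pair, and the chance of at least 8 correct classifications is a small tail sum of the binomial distribution; a sketch:

    import math

    def binom_pmf(r, n, p):
        # Probability of exactly r successes in n Bernoulli trials.
        return math.comb(n, r) * p**r * (1 - p)**(n - r)

    # At least 8 correct out of 10 pairs under pure guessing:
    p_guess = sum(binom_pmf(r, 10, 0.5) for r in (8, 9, 10))
    print(round(p_guess, 4))   # about 0.0547 -- meagre indeed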
We have seen how the binomial distribution can be applied to various problems. It must be appreciated that the formula is valid provided p remains (practically) unchanged during the entire experiment. Let us imagine that in a city of 100,000 population exactly 120 people suffer from AIDS. We gather 1000 citizens and subject them to the medical test. What is the probability of finding r = 0, 1, 2, etc., people who turn out to be suffering from the ailment? Here p = 120/100,000 = 0.0012. Calculating P(r) directly from the binomial formula in this case is a bit difficult. Fortunately for us there is a short cut to help us out.

The alternative is known as the Poisson distribution, and it finds many applications in scientific, communication and biological fields. This distribution was first published in 1837 by POISSON (1781-1840), a French mathematician of great repute. Just like the binomial, the Poisson distribution arises from a simple probability model. It is usually described as the probability law of the number of successes in a large number of repeated trials, each with a very small probability of success. Here the events happen randomly in time, at an average rate of m events per unit time. The unit of time may be years, weeks or seconds, depending on the application. The Poisson distribution tells us the probability of 0, 1, 2, ..., r events happening in that unit of time. It is given by the expression:

P(r) = e^(-m) m^r / r!

Here e = 2.718... is a constant. For small values of m and r the Poisson probability can be quickly evaluated with the help of a pocket calculator. For larger values the calculations become tedious, but there are tables available which enable us to estimate P(r), given m and r. The formula can be modified if we want the probability of r successes in t units of time: in place of m we use (mt). It can be shown that for large values of n the binomial distribution reduces to a Poisson distribution. The example on AIDS can thus be simplified using the Poisson expression: in place of m we use p times the sample size, that is, m = 0.0012 × 1000 = 1.2.
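How good is the short cut? A sketch comparing the exact binomial probabilities with the Poisson approximation for the AIDS example:

    import math

    n, p = 1000, 0.0012
    m = n * p   # 1.2 expected cases in the sample

    for r in range(4):
        exact = math.comb(n, r) * p**r * (1 - p)**(n - r)
        poisson = math.exp(-m) * m**r / math.factorial(r)
        print(r, round(exact, 4), round(poisson, 4))   # the two columns agree closely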
Poisson distribution: a well-known probability model (P(r) plotted for m = 1/2, 1 and 2)
The values of P(r) can be tabulated, so that the problem reduces to one of looking up the table.

The Poisson distribution has a curious history. About 150 years ago it came to the notice of the German army that one of the
causes of accidental deaths was horses kicking the soldiers. A mathematician amongst the officers kept an account of such deaths, and used the data to show that these accidental deaths followed the Poisson distribution. They were astonished that a law discovered by a Frenchman was applicable to their horses!

In today's highly mechanised world the horses have given way to automotive vehicles. At a busy intersection a policeman on duty collected some data and concluded that an individual vehicle could be involved in an accident with a probability p = 0.0001. During the peak traffic hours between 4 and 6 PM the number of vehicles at the intersection is about 2000. Under these conditions, what is the probability that there will be 0, 1, 2, 3, ... accidents? In the two-hour period, the average number of accidents is expected to be p × 2000 = 0.2. Using the Poisson distribution and tabulating the values of P(r), we get the probabilities of 0, 1, 2, ... accidents. During the two-hour period the probability that there will be no accident, P(0), is 0.8187. Consequently, the probability that there will be an accident (one or more) is 1 - P(0), which is about 0.18.
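A sketch tabulating these accident probabilities, with m = 0.2 for the two-hour period:

    import math

    m = 0.0001 * 2000   # expected accidents in the two-hour peak period = 0.2

    for r in range(4):
        print(r, round(math.exp(-m) * m**r / math.factorial(r), 4))
    # P(0) is about 0.8187, so P(one or more) = 1 - 0.8187, about 0.18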
So far we have been talking about discrete random experiments, their outcomes and the probabilities associated with these outcomes. It is relatively easy to estimate the probability of a discrete outcome on the basis of a priori or relative frequency considerations. But there are many situations where the outcomes are so numerous and so close to one another that they take an almost continuous form in the sample space. The probability distribution then assumes the shape of a continuous curve. Continuous probability distributions require a more careful definition than discrete ones; the difficulty lies in allocating probabilities to an infinite number of outcomes in such a way that they still add up to 1.

In the famous casinos all over the world, roulette is a popular game. The American version of the game consists of a ball rolling round and round in circles before settling down in a slot. There are 38 such slots: 36 are numbered from 1 to 36 and the remaining two are numbered 0 and 00. Since the ball can come to rest in any one of the 38 slots with equal probability, the distribution graph consists of 38 vertical lines associated with the discrete random variable. The probability of each of these outcomes is 1/38 or 0.026. In another version of the game, instead of slots the circumference is partitioned into 38 equal regions and a pointer spins around at the centre. Now there are an infinite number of points on the circumference at which the pointer can come to rest. Since the circumference is divided into 38 equiprobable intervals, the probability of the pointer stopping in any one of them is still 1/38 or 0.026. This is an example of a continuous random variable, and the probability distribution is box shaped. Such box-shaped distributions are known as "uniform" probability distributions, for obvious reasons. When dealing with continuous random variables we consider not the probability that the variable takes some specific value x, but the probability that it falls within an interval, however small, of values.

An example of a uniform distribution arises when you wait for your food in a restaurant. A waiter comes along and takes your order. Let X be the random variable representing the time in minutes between the order being placed and the food being served. Past experience indicates that it never takes less than 10 minutes and never more than 20, and you are equally likely to be served at any time between these two limits, an interval of 10 minutes. The probability density is therefore p(x) = 0.1 per minute, and the probability that you have to wait from 13 to 16 minutes works out to be p(x) × (16 - 13) = 0.1 × 3 = 0.3. Hence, the probability that you will be served between 13 and 16 minutes after the order has been placed is 0.3.
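For a uniform distribution the probability of an interval is simply the length of the interval times the constant density; a sketch of the restaurant example:

    # Waiting time uniform between 10 and 20 minutes: density 1/10 per minute.
    a, b = 10.0, 20.0
    density = 1.0 / (b - a)

    def prob_between(lo, hi):
        # Probability that the wait falls in [lo, hi], within the 10-20 range.
        return (hi - lo) * density

    print(round(prob_between(13, 16), 2))   # 0.3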
Probability of an outcome in roulette may be a discrete (a) or a continuous (b) random variable
A passenger arrives at a railway station at any time between 8 AM and 9 AM with equal probability. His arrival at the station thus follows a uniform distribution with p(x) = 1/60. The trains which could take him to his destination leave at five-minute intervals, that is, at 8.00, 8.05, 8.10, ..., 9.00 hrs. What is the probability that our friend does not have to wait longer
than 2 minutes for his train? There are 12 trains of interest. He has to arrive at the station between 8.03 and 8.05, between 8.08 and 8.10, and so on. So we are interested in his arriving at the station only in these intervals of time, which add up to 24 minutes. The probability that he does not have to wait longer than 2 minutes at the station to catch his train is therefore 24 × 1/60 = 0.4.

There is one very important continuous probability distribution: the "normal distribution". Its importance derives from the fact that many kinds of random variables and data tend to follow it. Imagine a trial of tossing 16 coins up together. We wish to know the probability of obtaining 0, 1, 2, ..., 16 heads. Here we could use the binomial distribution with n = 16, p = 0.5 and q = 0.5. These probabilities can be evaluated for all the r's and plotted as a histogram. If we were to repeat the experiment a thousand times and superimpose the new histogram over the earlier one, we would find that the new curve is similar to the earlier one; in fact, it appears smoother. Both histograms closely approximate a mathematical curve known as the normal distribution. Its main features are:

* It is symmetrical and has a bell shape.
* Its two tails continually approach the horizontal axis without ever touching it.
It is sometimes called the Gaussian distribution after the famous German mathematician Carl Friedrich GAUSS (1777-1855). It was mentioned by de Moivre as early as 1733 as a limiting form of the binomial distribution, and was rediscovered by Gauss in 1809 and by Laplace in 1812. In the expression for the Gaussian distribution there are two important parameters, m and σ; the former is the mean and σ is the standard deviation. Changing the value of m merely translates the curve from left to right; σ, on the other hand, affects the shape of the curve. Whatever
Normal distribution curve is bell-shaped, as proposed by Carl Friedrich Gauss (inset); the curve shown has mean = 50 and standard deviation = 10
the value of the standard deviation, the general form of the curve remains symmetrical and bell shaped, and the total area under the curve is unity. Fortunately, one does not have to deal with Gauss's equation too often; tables are available which provide the probability densities. Useful information can be derived knowing m and σ: if we know these two parameters, the normal curve can be drawn accurately. Interestingly, 68% of the values of the random variable lie within one standard deviation on either side of the mean. Approximately 95% lie within two standard deviations, and over 99% within three standard deviations of the mean.
Time and motion study is undertaken to improve the efficiency of production in factories. Experts observe the workers and, with the help of stopwatches, find out the time required to do a job; they also suggest better ways of doing things. In a radio factory it takes on an average 48 minutes to assemble a radio (m = 48 minutes). Measurements also show that σ = 12 minutes. Based on these two pieces of information, and assuming that the actual time required to assemble a radio follows a normal distribution, one can conclude that 68% of the radios are probably assembled within ±12 minutes of the mean, that is, from (48-12) to (48+12) minutes, or in other words from 36 to 60 minutes. Of the remaining 32% of the radios, about half take less than 36 minutes to assemble and the remaining 16% take more than 60 minutes. Based on this information the management can distribute incentives or bonuses for quick assembly and penalties for those who take longer!

A mathematician-cum-farmer grows apples in his orchard. He finds that the weight of the apples follows a normal distribution with a mean weight of 200 gm and a standard deviation of 25 gm. He picks an apple at random. What is the probability that it weighs between 150 gm and 225 gm? Here we need the area under the normal curve from X = m - 2σ to X = m + σ. The portion of the curve from (m - 2σ) to m has an area of 95/2 or 47.5%, and that between m and (m + σ) is 68/2 or 34%. So the probability is 47.5 + 34, or roughly 82%. If we wanted to know the probability that it is an oversized apple (X > m + 2σ), then it is (100 - 95)/2 or 2.5%.
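The exact areas under the normal curve can be computed from the error function available in Python's math module; a sketch of the apple example, which also checks the rule-of-thumb percentages:

    import math

    def normal_cdf(x, m, sd):
        # P(X <= x) for a normal variable with mean m and standard deviation sd.
        return 0.5 * (1 + math.erf((x - m) / (sd * math.sqrt(2))))

    m, sd = 200.0, 25.0
    p = normal_cdf(225, m, sd) - normal_cdf(150, m, sd)
    print(round(p, 3))   # about 0.819, close to the rough 82% above

    oversized = 1 - normal_cdf(m + 2 * sd, m, sd)
    print(round(oversized, 3))   # about 0.023; the rule of thumb says 2.5%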
The weight of apples on a tree follows a normal distribution (inset): m = 200 gm, σ (SD) = 25 gm
There is one more continuous probability distribution which is rather important in communication and equipment-reliability studies. We have already seen that the Poisson distribution is a discrete one: it tells us how many outcomes one can expect in unit time, the average number per unit time being m. We can go a little further and ask about the time intervals between outcomes which follow the Poisson distribution. This interval, also known as the waiting time for the next outcome, is a continuous function of time. It is a simple decay function which reduces exponentially with time.
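A sketch of this exponential waiting-time law; the rate m = 2 events per unit time is an assumed value for illustration:

    import math

    m = 2.0   # assumed average number of events per unit time

    def p_no_event_before(t):
        # Probability that the next event has not yet occurred by time t.
        return math.exp(-m * t)

    for t in (0.5, 1.0, 2.0):
        print(t, round(p_no_event_before(t), 3))   # decays exponentially with t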
Markov's Frogs
A lake is overgrown with lily plants, well known for their large leaves which float on water. Now, there happens to be a frog sitting on a leaf. Regularly every minute, that is, at times t = 1, 2, 3, ..., n minutes, the frog jumps into the air and lands on another leaf, but never in the water. Which particular leaf it lands on is entirely a matter of chance. When the frog is on leaf a, we say it is in state Sa. It next jumps and lands on leaf b, so it has moved to state Sb, and the process continues; the state Sb could be anywhere on the lake. So, at t = 1, 2, 3, ..., n minutes, the frog would be in states Sa, Sb, Sc, ..., Sn. This specifies the frog's entire itinerary for the day. What kind of information can we hope to derive from this itinerary? We could assign a probability that it is in state Sn after n jumps. This probability would be a conditional one and would depend on all the previous states the frog has been in. Such systems, where the state changes at intervals of time, are known as stochastic processes. A.A. MARKOV (1856-1922) was a celebrated Russian mathematician who was interested in finding solutions to just such problems.
Back to square one. What a waste of time!
How far the frog is from its starting point may not be of great consequence, but there are numerous similar problems in real life to which Markov tried to find solutions. Volumes have been written about his work; in many instances he had to simplify the structure of the problems. We will now have a look at some of the simpler Markov processes.

An interesting example of a Markov process is related to the manner in which we speak and produce sounds which convey meaning to the listener. The large number of sounds which we produce with the help of our vocal mechanism are known as phonemes; by stringing together phonemes we produce words. There are about 44 phonemes in a language such as English. Some of these are known as vowels and the remaining as consonants.
Combination of phonemes: the possible transitions from one phoneme to the next at times (t-1), t and (t+1)
Consonants are shorter in duration than vowels, and they also contain less sound energy. Consonants and vowels by and large follow one another; that is, the probability of two or three successive consonants is not very high. By analyzing spoken languages, researchers have determined the transition probabilities, that is, the probability of moving from one particular phoneme to another. Once a phoneme has been uttered, it is followed by one of a few other phonemes. This constitutes a conditional prob-
ability Pij: the probability that at time t we are in state j (phoneme j), given that at time (t-1) we were in state i (phoneme i). Based on the data collected for a particular spoken language we can draw up an array of such probabilities Pij for moving from state Si to state Sj.

Coming back to the frogs, Markov placed some restrictions on their movement. All the lily leaves were placed in a row. The frog now jumps to the leaf on its left with a probability p or to the one on its right with a probability q; it cannot jump over a neighbouring leaf to one beyond. Consequently, the probability of the frog being in state Sn depends only on its immediately preceding state Sn-1, and not on Sn-2 or Sn-3. The probability of a transition is written P(Sn/Sn-1). Problems of this category are known as "Markov dependent Bernoulli trials".
Markov dependent Bernoulli trial
The expression Bernoulli trial comes in because the frog has only two alternatives: it can jump to its left or to its right. This example of the frog may appear a bit abstract, but there happen to be a number of interesting situations like it in real life. Take the case of a weatherman in Cherapunji, where it rains quite a bit. He kept records of whether or not it rained each day, and attempted to correlate each day's happening with that of the previous day. If s stands for the success of rain and f for its failure, the weatherman's observations constitute a series of Markov dependent Bernoulli trials. The transition probabilities follow the pattern: P(ss), the probability of rain tomorrow given rain today; P(sf), no rain tomorrow given rain today; P(fs), rain tomorrow given no rain today; and P(ff), no rain tomorrow given no rain today.

A question anyone might ask is whether it will rain the day after tomorrow, given rain today. The probability of such an eventuality is expressed as P2(ss), the subscript 2 standing for the second day after today. Now, the actual situation could be that it rains today and tomorrow, P(ss), followed by rain the day after tomorrow, again P(ss). Alternatively, it might not rain tomorrow, P(sf), with rain following the day after tomorrow, probability P(fs). In such a case, P2(ss) = P(ss) P(ss) + P(sf) P(fs). Consequently, the probability that it rains today and remains dry the day after tomorrow is given by P2(sf), which is equal to 1 - P2(ss). So long as the data collected are spread over a long time, the probabilities P(ss), P(ff), P(sf) and P(fs) should be reasonably reliable, and one can estimate whether or not it will rain over the two-day period.
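A sketch of this two-step calculation; the one-day transition probabilities are assumed values, chosen only for illustration:

    # Markov dependent Bernoulli trials for rain (s) and no rain (f).
    P = {("s", "s"): 0.7, ("s", "f"): 0.3,   # assumed: after a rainy day
         ("f", "s"): 0.4, ("f", "f"): 0.6}   # assumed: after a dry day

    # Rain the day after tomorrow, given rain today: sum over tomorrow's state.
    p2_ss = P[("s", "s")] * P[("s", "s")] + P[("s", "f")] * P[("f", "s")]
    print(round(p2_ss, 2))       # P2(ss)
    print(round(1 - p2_ss, 2))   # P2(sf): dry the day after tomorrow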
Rain or no rain?
A good communication system (top) and not a good one (below)
We can use a similar method to solve an important problem in modern telecommunication. These days communication signals pass through electronic equipment and channels in digital format. The signal is in the form of a binary digit, or bit, that is, 0 or 1. A 0 can enter a channel and come out as a 0, but due to some disturbance it may be altered on its way and come out as a 1. If s and f in the case of rain at Cherapunji are substituted by 1 and 0, P(00) means a 0 entered the channel and came out as a 0, as it should. Both P(00) and P(11) should have the same probability, say p; so, naturally, P(10) and P(01) have probability q. If there are two communication channels in tandem (one after the other), then we want to know the probability P2(00), that is, the probability of entering the first channel as a 0 and coming out of the second channel as a 0. Following the 'Cherapunji model' we have P2(00) = p^2 + q^2. If for a certain case we assume p = 1/3 and q = 2/3, we find that P2(00) is equal to 5/9, or about 0.55.

There is a serious point to consider at this stage. The 0 which went in could have got converted into a 1, and in the second stage this 1 could have come out as a 0. As far as good
engineering is concerned, a 0 must remain a 0 through both stages of the communication channel. The probability of this eventuality is P2(00) = P(00) × P(00) = 1/3 × 1/3 = 0.11. In a digital communication system, the integrity of the signal must be maintained through all its various stages; this is not a necessary requirement for the weather forecaster.

Let us now see how Markov's work helps a businessman in marketing. In a particular city there are only two kinds of toilet soap available; let us call them C (for its colour) and W (for being white). There are also some other differences which make the citizens prefer one to the other. An independent survey reveals that any person currently using C will repurchase C the next time with a probability of 0.8 (since it seems to agree with him), or will purchase brand W with a probability of 0.2. A person patronizing brand W today might change his preference the next time with a probability of 0.3, or continue with the same brand with a probability of 0.7. This is a two-state Markov process. The question is: what will happen after a sufficiently long time? The preferences stabilize, and it turns out that C will have a 60% share of the market and W will have 40%.
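The long-run shares can be found by simply iterating the transition probabilities until they settle down; a minimal sketch:

    # Two-state Markov process for soap brands C and W.
    p_cc, p_wc = 0.8, 0.3   # P(buy C next, given using C) and P(buy C next, given using W)

    share_c = 0.5           # any starting share will do
    for _ in range(100):    # iterate until the preferences stabilize
        share_c = share_c * p_cc + (1 - share_c) * p_wc

    print(round(share_c, 2), round(1 - share_c, 2))   # 0.6 and 0.4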
Barking up the Tree

Yet another way of looking at problems related to probability is to represent them in the form of a tree diagram. The method is useful as it clarifies the structure of a problem at its various stages.
This is best explained by an example. A sports goods manufacturer tests the cricket balls he produces in his factory. All the balls are supposed to weigh exactly five and a half ounces, or about 156 gm. It has been observed that four balls from a batch of 50 are defective, that is, they are either underweight or overweight. A ball is picked at random and tested; after testing, it is put back in the lot, which is mixed up properly. Another sample is then picked up and weighed. This is repeated three times in all, and if the three samples turn out to be all defective, the entire batch of 50 is rejected. This problem can be represented by a tree diagram which shows what is happening at each stage. A defective ball is denoted by the letter D and a good one by G, and the probabilities are indicated at each of the three stages. This is a case of testing with replacement, and consequently P(D) = 4/50 and P(G) = 46/50 at every stage.
A simple tree diagram.
After the first test there are two ways the joint events can happen: D or G. After the second stage there are four paths, and after the third, eight. The probability that all three balls are defective, so that the lot is rejected, is (4/50)^3, about 5 in 10,000; the probability that they are all good is (46/50)^3, about 0.78. There is not much saving in calculation time, but some people prefer the tree diagram approach as they can visualize the next step taken at each stage.

The same tree structure can be used to convey the concept of conditional probability. Here we don't put the tested ball back into the lot; it is kept aside.
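Multiplying along the branches of the tree gives the figures quoted; a sketch covering both sampling schemes:

    from fractions import Fraction

    # With replacement: every draw has P(D) = 4/50.
    p_d = Fraction(4, 50)
    print(float(p_d ** 3))                  # about 0.000512, i.e. 5 in 10,000
    print(round(float((1 - p_d) ** 3), 2))  # all three good: about 0.78

    # Without replacement: the probability changes at each stage.
    p_all_def = Fraction(4, 50) * Fraction(3, 49) * Fraction(2, 48)
    print(round(float(p_all_def), 4))       # about 0.0002, smaller than before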
The tree diagram is similar, but we use conditional probabilities, as indicated in the figure. The probabilities keep changing, depending on the outcome of the previous stage. The probability of all three balls being defective is now (4/50) × (3/49) × (2/48), or about 0.0002, which is less than in the earlier case of putting the balls back in the lot.

The tree diagram can also be used for solving business problems; in the business world, such diagrams are known as decision trees. Every business wants to maximize profits and minimize losses. Consider an example of marketing strategy. A company wants an effective strategy to launch a new consumer product. It can advertise in the newspapers and journals or on television. The experts in the company anticipate an expenditure of Rs 30 lakh towards TV advertisements, and put the probability of the campaign's success at 0.75. If the TV advertisements are successful, the potential payoff in the first year is Rs 200 lakh; if the TV campaign fails, the company stands to lose the Rs 30 lakh it spent on the advertisements. With regard to newspaper coverage, for which the company would spend Rs 10 lakh, the probability of success or failure is 0.5 and it can go either way. If the press campaign is successful the payoff is Rs 100 lakh, whereas the loss would be Rs 10 lakh for a failure. The Managing Director wants to know whether he should go in for TV or press advertisements. The problem can be neatly set out by means of a decision tree, which shows the logical or natural progression of the management decision process. The expected payoff for each outcome is the product of the payoff and its associated probability of success or failure, and the expected total payoff for each marketing strategy is the sum of its expected payoffs for success and failure. In this example, the Managing Director will recommend the adoption of the TV campaign.
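The Managing Director's choice amounts to comparing expected payoffs; a sketch using the figures in the text (amounts in lakhs of rupees):

    # Expected payoff = P(success) x gain + P(failure) x loss.
    tv = 0.75 * 200 + 0.25 * (-30)    # TV campaign
    press = 0.5 * 100 + 0.5 * (-10)   # newspaper campaign
    print(tv, press)                  # 142.5 versus 45.0 -- TV wins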
Decision-making has its own problems. What is good for the manufacturer may not necessarily be a good decision for the purchaser of the goods, as this example reveals. A factory produces electric switches which are sold in boxes of 10. One can expect that there will be some defective switches along with the good ones. The purchaser wishes to buy only good switches, but he cannot test each one of them; the vendor, on the other hand, is trying hard to push his switches into the market. An expert suggests two procedures for their consideration: Plan A and Plan B.
Operating characteristic curves for Plan A (above) and Plan B (below). P(d) is the probability of acceptance and d is the number of defective switches in a box
Plan A consists of selecting two switches at random from a box. If both are satisfactory, the box will be accepted by the purchaser. Otherwise the entire lot is rejected. Plan B consists of selecting two switches at random from the box and testing them. If both are satisfactory, then the lot is accepted. If both are defective, then it is rejected. If one is good and the other is bad, then a third switch is tested. If it is satisfactory, then the lot is accepted. Otherwise, it is rejected. At this stage, the two parties consult a mathematician and he not only draws the tree diagram but also works out the operating characteristics.
In this problem there is no information about the actual percentage of defective switches, so we work out the probability of acceptance, P(d), as a function of d, the number of defective switches in the box of 10. The acceptance probabilities for the different values of d are plotted, and the resulting curve is known as the operating characteristic, or OC, of that particular plan. After examining the two plans and their OCs, it appears that Plan A would be preferred by the purchaser, whereas the seller would like Plan B to be adopted.
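The operating characteristics can be computed by counting the ways of drawing good and defective switches. The sketch below assumes the switches are drawn at random without replacement from the box of 10; the plan-B formula follows the rules as described in the text:

    import math

    def oc_plan_a(d, N=10):
        # P(accept): both of the 2 sampled switches are good.
        return math.comb(N - d, 2) / math.comb(N, 2)

    def oc_plan_b(d, N=10):
        # P(accept): 2 good, or 1 good and 1 bad followed by a good third.
        both_good = math.comb(N - d, 2) / math.comb(N, 2)
        one_each = (N - d) * d / math.comb(N, 2)
        third_good = (N - d - 1) / (N - 2)
        return both_good + one_each * third_good

    for d in range(11):
        print(d, round(oc_plan_a(d), 3), round(oc_plan_b(d), 3))

Plan B's curve lies above Plan A's for every d, which is why the seller prefers it.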
The Uncertain Engineer
Reliability as a human characteristic is an admirable quality. Everyone appreciates and admires a reliable person. Reliability is synonymous with trustworthiness, dependability and consistency: a person is said to be reliable if he accomplishes correctly the task assigned to him. Since it is a subjective issue it is difficult to quantify, though we often compare people on the basis of reliability; we say that person A is more reliable than B. It is not surprising that the concept of reliability soon came to be applied to machines and their components. Just as people let others down at crucial moments, so also machines are known to fail when they are most needed. If a simple electric switchgear in a large chemical factory were to malfunction, it could lead to expensive materials going waste and even poisonous fumes being released into the atmosphere. The factory might lose a considerable amount of money and even put human life in jeopardy. Engineers are concerned with the duration of the useful life of components and systems of components. The study of this subject
is called reliability theory. Its importance has been highlighted by the recent disasters in nuclear power stations, satellite-launching rockets going haywire, and so on. Reliability is associated with the probability that a system operates and functions as originally specified. A system could fail for many reasons, including faulty components. Factory records help us to determine how often parts, subsystems and entire systems fail. The "failure rate" m is defined as the number of failures per unit time, which may be in days, months or years. Assuming that failures happen in accordance with the Poisson distribution, we can say that the time
interval between failures follows the exponential law. A system which fails often (that is, with a large value of m) is certainly not a reliable one; the time intervals between failures should be as long as possible. A question which is often posed of any system is: "Is it reliable enough?"

PARAMETERS OF RELIABILITY
Failures per unit time = m
Poisson probability of r failures in time t = (mt)^r e^(-mt) / r!
Time between failures: p(t) = m e^(-mt)
F(t), the probability of at least one failure between t = 0 and t = 1 - e^(-mt)
R(t), the reliability, or probability of no failure from t = 0 to t = e^(-mt)
Time to repair = Tr
Fractional dead time D = mean time in failed state/total time = mTr/(1 + mTr)
Mean availability A = mean available time/total time = 1/(1 + mTr)
Reliability is the probability that a system survives until time t. Take the case of a digital control system containing 10 printed circuit cards. It is known from experience that each board fails once in two years. Service people have to be called in to put things right, but they are expensive, and for budgeting purposes the management wants to know how many service calls to expect in one year. The overall failure rate is m = 0.5 × 10, or 5 breakdowns per year. The probability of at least one breakdown in the next three months (1/4 year) can also be estimated by using the formula for F(t). Another example is that of an electric supply system, the failure of which causes
loss of supply to consumers. The mean time between such failures is known to be 45 hours, that is, the failure rate per hour is m = 1/45. If the mean time to repair is five hours, the mean availability, on calculating, comes out to be 0.9; that is, the system would be in working order 90% of the time.

Reliability has been expressed as the probability of a system or subsystem being functional. Obviously, this probability depends on how long the system has already been in use. The number of failures per unit time, m, does not remain constant during the entire lifetime of the system. It has a higher value during the early stages of the system's life; after a while m remains constant, and towards the end of the useful lifetime it rises once again. The curve has a shape similar to that of a bath tub. It is interesting to note that the failure rate of equipment and the probability of human death follow a similar bath-tub shape.

There are two other factors in reliability engineering which will be of interest to everyone. One is the mean time between failures, or MTBF. It applies mainly to repairable systems like the electricity supply or distribution system. If the overhead cables come down due to strong winds, the lines have to be repaired on site; managers of such systems have to carry lots of spare parts as well as experts who can locate and repair faults. The other term is the mean time to failure, or MTTF. It applies to systems of the non-repairable type, such as rockets. MTTF is the average time an item may be expected to function before a catastrophic failure; at best, the subsystem or the system itself is replaced when the MTTF has been reached. Managers of such systems usually carry spare subsystems and even spare systems in their inventory. The defence services invariably have extra aircraft, tanks and vehicles to take the place of those destroyed in an encounter.

Large systems involve heavy investments, and it is reasonable to expect a successful record of operations from them. This is another way of saying that the system should have a high degree of reliability.
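Both numerical examples above can be checked in a few lines, using the formulas from the parameters box; a sketch:

    import math

    # Digital control system: 10 cards, each failing once in two years.
    m = 10 * 0.5                               # 5 breakdowns per year
    print(round(1 - math.exp(-m * 0.25), 2))   # P(at least one breakdown in 3 months), ~0.71

    # Electric supply: mean time between failures 45 h, mean repair time 5 h.
    m_supply, t_repair = 1 / 45, 5
    print(round(1 / (1 + m_supply * t_repair), 2))   # availability = 0.9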
Failure rate of a system follows a bath-tub shape: a wear-in period, a constant failure-rate period and a wear-out period
Systems such as the telephones, electricity supply, hospitals and police would put a large population to inconvenience if there were failures, and if the defence system does not function at the crucial moment, the country is in trouble. Such systems, which involve heavy outlays, must be absolutely dependable. There is a cliche, "You get what you pay for", and there is some truth in it: cheap or poorly engineered items are more prone to failure than the expensive variety. However, there is also "safety in numbers", that is, redundancy, which often leads to the paralleling of subsystems.

First, let us see what happens when we link subsystems in series. These can individually fail in a random manner but,
whenever one component fails, the entire system comes to a standstill. The overall reliability R of a series system is the product of the reliabilities of the individual components. Next, consider the case of a large wide-bodied jet aircraft used to transport passengers from one country to another. It uses four identical engines, each having the same reliability. The aircraft requires two engines to be in operation for take-off, but it needs only one to be operating
Of the four engines, an aircraft requires two for take-off and just one for landing
for landing. Yet, for safety, it generally has four engines. The reliability of such systems, with subsystems in parallel, can be analysed by the use of the binomial probability distribution. Assuming the reliability of each engine to be 0.9, the advantage of having four engines is that the overall reliability is now quite high, at 0.995. But the cost of the system goes up by 100%, as the aircraft now has four engines instead of two. Based on such studies of reliability, renewal and replacement policies can be initiated by the management. However, in view of the improvements in technology and the consequent reliability of modern engines, newer aircraft have only two engines.
Chance and Management

When the second world war commenced in 1939, the Germans were well prepared for it. They had many aeroplanes, ships and tanks, which came in handy to push England, France, Holland and other countries into a tight corner. At this stage the scientists, economists and industrialists in England rallied around to assist military operations.
Eventually a new discipline developed, which came to be known as Operational Research (OR); in America it is known as Operations Research. It immediately started yielding results. The subject was based to a large extent on probability theory. Basically, it enabled the military to organise their operations in such a manner that the impact on the enemy was maximum, whereas the probability of their own losses or wastage of resources was minimum. There was a marked improvement in the effectiveness of military activities such as night bombing of enemy targets, anti-submarine warfare, optimising the size and composition of convoys, and so on. The subject became so popular that after the war it came to be used in managing industries, organising large construction operations, and exploration of
mineral and oil deposits. There are many uncertainties involved in such schemes: things get delayed on account of labour unrest, fuel shortages, materials reaching wrong destinations, and so on. The probabilities of such individual components follow the Poisson, uniform or normal distributions. OR involves many distinct stages. To begin with, the main features of the operation and their relationships are studied, and mathematical models are set up to describe them. The models are easy to manipulate using computers these days. The effect of changing one parameter (say, a delay
in delivery) on the overall performance can then be evaluated; these are known as sensitivity studies. Having arrived at a suitable model, the OR scientists optimise resources (money, materials and manpower) so that the objectives of the entire operation are achieved. Often there are constraints, such as lack of skilled manpower or of specialised machinery, and these have to be incorporated in the overall plan. Broadly speaking, the OR approach is an effective and systematic method of dealing with a wide class of managerial problems. In the course of finding solutions, many decisions have to be taken, and a probability of success is associated with every decision. If one is managing a large hospital, for example, the important features are the patients, doctors, nurses, the stock of medicines, facilities such as X-ray, CAT scan and ICU, and above all finances. The manager has to manipulate all these features to ensure that the reputation of the hospital is kept high and that everyone is happy. There is no point in having a huge stock of medicines, thereby locking up funds which could be better utilized; and there must be the right number of doctors in the OPD, or else long queues will form in front of the hospital. An important task of the manager is to take decisions based on his earlier experiences.

In the 1950s, the American navy was planning the Polaris missile programme: intercontinental nuclear missiles were to be fired from submarines lying submerged under the seas. The US navy evolved what came to be known as the "Programme Evaluation and Review Technique", or PERT, to complete the entire project within a time schedule. It has now been adopted by industry and government in most parts of the world; in fact, no one accepts a project proposal unless a PERT chart accompanies it. PERT can be applied to any task which calls for a planned, controlled and integrated effort to complete a project on time. The range of applications varies from the preparation of a
simple meal to the construction of a nuclear power station. PERT enables potential bottlenecks to be identified and the theory of probability to be employed to quantify the uncertainties. In this way the manager is helped in making the decisions for which he is responsible, and his attention is focussed upon the critical aspects of his problem. Milestones are represented in PERT networks by circles, and arrowed lines are drawn between the circles to indicate the direction of the flow of activities.
A PERT network for making a simple meal
An event is a highlight: it indicates that all activities leading to it are complete. The arrowed line between events indicates an activity, such as procurement of raw material, getting the right manpower, actually fabricating an equipment, and so on. PERT requires three estimates of the time needed to complete each activity. First, the "most optimistic time" Ta is estimated; here it is assumed that everything happens without delay (no labour problems, no shortage of money, etc.). Then we estimate the "most likely time" Tm, corresponding to the mode of the probability distribution of the time required. Finally we estimate the "most pessimistic time" Tp, defined as the longest period of time which would be required if all the factors were adverse. On the basis of these three estimates a new estimate, known as the "average or expected time" Te, is calculated using the formula:

Te = (Ta + 4Tm + Tp)/6
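A sketch of the Te computation for two illustrative activities; the time estimates are invented for the example:

    # PERT expected time: Te = (Ta + 4*Tm + Tp) / 6
    def expected_time(ta, tm, tp):
        # Weighted average of optimistic, most likely and pessimistic times.
        return (ta + 4 * tm + tp) / 6

    # Assumed estimates (in minutes) for two activities of the meal network.
    print(expected_time(4, 5, 9))    # clean rice: 5.5
    print(expected_time(2, 3, 10))   # cut onions, tomatoes and chillies: 4.0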
After an estimate of Te for each activity in the PERT network is made, the next step is to estimate the "latest allowable time" T1 for each event. These are computed in the reverse order to that of Te. The difference between T1 and Te is called the "slack" of the event. A positive slack indicates an ahead-of-schedule condition; it also implies that excess resources have been used. A zero slack indicates an on-schedule condition, with just the right amount of resources used. A negative slack means a behind-schedule condition and that some resources are lacking. The value of the slack associated with an event indicates how critical that event may become: the smaller the slack, the more critical the event. A path between the events in a network which is critical in terms of slack can be identified on this basis. This is called the critical path of the network, and it is the one on which the manager must focus his
attention. The critical path determines the least time in which the project can move from the initial to the final event; any event on the critical path that slips in time will delay the completion of the project by the same amount.

In a queue: When you get to an airlines office, the receptionist first gives you a slip of paper with a number on it. You wait in a lounge until your number appears on a screen, along with another number indicating the counter you should proceed to. This is an example of a modern queue-and-service system, which is a common way of doing things these days. There was a time when, if you wanted to purchase a railway ticket, you fought your way aggressively through a crowd in front of a window, pushed your money through the window if and when you reached it, and yelled the name of the station and the number of tickets you wanted. Things have changed since then; today you see orderly queues at the computerised reservation counters. Waiting to be served has become the norm all over the world, and not without reason.

There are many similar situations where queues are formed. At international airports it is a common sight to see four or five large aircraft waiting on the tarmac for their turn to use the runway for take-off. What is not often noticed is that planes arriving to land are assigned altitudes at which they are asked to fly in circles above the airport until it is their turn to land. In such cases, waiting in a queue can be an expensive proposition, since precious fuel is being burnt and wasted; but it is essential for avoiding accidents. It often happens when you attempt a trunk call using STD (subscriber trunk dialling) that you hear the message: "All lines are engaged, please call later". As a subscriber you are annoyed and wonder why they don't introduce more trunk lines. Unlike in a physical queue, where you see people moving along and being served, there is nothing for you to observe in the telephone service; you don't even know when the trunk lines will become free! On the other hand, you go to an auto-rickshaw stand at peak hours and find none, but
Planes queuing up for landing
during slack hours, like on a hot summer afternoon, they are waiting to take you anywhere.

The arrival times of customers in a bank are random events. When your turn comes you move up to the service counter and you are attended to. How long that takes depends on the complexity of your problem, and those of the persons ahead of you in the queue, and on the efficiency of the server. So there are a number of probability distributions operating in such a system. Simplifying, without taking recourse to complicated mathematics: let Wn be the random time a customer at a bank has to wait to be served, and let gn(t) be its probability density, where n is the number of servers. A knowledge of gn(t) for various n, for an approximately known number of customers per day, is very useful to the manage-
ment of a bank. If n is too small, a lot of customers will have to wait a long time to be served. This will make them impatient, and they are likely to do their banking elsewhere. On the other hand, if n is too large, customers will be served immediately, but many servers will be idle for much of the time and there will be needless expenditure on paying them to do nothing; they may also get bored and search for another job.

In a factory, a number of components arrive and wait their turn to be assembled into systems like TV sets or automobiles.
Keeping partially processed items in a store is like waiting in a queue
If components have to wait too long in the store, then the factory has really spent on too many components; if they are consumed too quickly and there is a shortage, the factory will have to bear a loss of income. Components arriving in the right numbers and at the right time is known as the "Just in Time", or JIT, philosophy of factory management.
These and many other situations crop up in everyday life, in the corporate sector and in industry throughout the world. Queueing theory, in one form or another, helps in solving these problems. Some are simple, while some are so complex that they can tie one up in knots with a surfeit of high mathematics.
Uncertainties in Physics

The study of physics helps us understand several phenomena occurring around us every day. It provides us with principles and laws that govern the relationships between space and time and, in turn, allows us to determine cause and effect.
Isaac NEWTON (1642-1727) studied these relationships and propounded laws that made it possible to predict the movements of bodies in motion precisely. For example, if a ball is rolled down an inclined groove, or a bullet fired from a gun, then it is possible to say precisely how it will move through the groove, how far it will go, or where it will be at a particular time. These laws built on the earlier work of GALILEO (1564-1642), and together they brought great precision and wide coverage of events. The accuracy of the predictions was impressive, and it led humankind to believe that all natural phenomena are deterministic: that they can be described, or even foretold, in a definitive manner. This certainly seemed true with respect to commonplace events which can be observed clearly with the naked eye.
Galileo showed that the velocity of a ball rolling down a slope increases uniformly with time
Events where the distances or velocities involved are within man's power of measurement, without the aid of highly sophisticated instruments, fall in this category. Soon, however, man turned his attention to events occurring in spheres beyond this: he wished to study phenomena that take place in outer space or inside atoms. At those levels he found that the certainty with which he could predict or describe the movement of the bodies concerned deserted him. The distances at which celestial bodies are located were far beyond the reach of the measuring instruments available to him; moreover, the velocities of these bodies were so high that it was difficult to pinpoint them so that the velocity could be measured accurately. This was also the case at the atomic level. When two billiard balls are approaching each other on the table it is
Predicting the velocity and direction of motion of billiard balls after collision is possible (above), unlike that of the molecules of a gas (below)
possible to know precisely, in advance, the result of their collision. Each ball moves with a velocity that can be measured accurately; hence the momentum and energy that each carries are known accurately. The directions of their motion are also known. Hence, applying the laws of conservation of energy and momentum, their individual velocities and directions of motion after they have collided can be predicted. Would that be possible when one is looking at the molecules of a gas inside a container? It becomes impossible, for several reasons. First, there is a very large number of gas molecules. Second, there is no order to their directions of movement; they all move in a totally random manner. Third, since their number is large they are constantly colliding with each other; the number of molecules colliding against one square centimetre of the wall of the container is of the order of a thousand billion. At every collision the molecules surely obey the laws of conservation of momentum and energy. But how much is the momentum of any single molecule? To know that, one would have to measure the velocity of every single molecule with some degree of accuracy, and that is not possible. Even if all the molecules started with, say, the same velocity, within a very short time there would be a large spread on either side of that value because of the rapid and random collisions.

It thus became clear that in several such situations it does not make sense to talk about the position or velocity of an individual particle; it is more meaningful to describe the behaviour of the entire population. One can only talk of the probability of a molecule having a certain velocity. James Clerk MAXWELL (1831-1879) derived a mathematical expression to describe the spread of velocities of the gas molecules. He stated that the probability of a molecule having a certain velocity depends on the mass of the gas molecules and the temperature of the gas.
James C. Maxwell first worked out the distribution of velocities for gases at different temperatures (number of molecules of a given speed versus molecular speed, at 0°C, 100°C, 500°C and 1000°C)
of the velocity is low at low temperatures and shifts to a higher value as the temperature rises. This spread of probabilities described by Maxwell's mathematical derivation came to be known as Maxwell's distribution. It is easily seen that the curve is not symmetrical. Though Maxwell arrived at the expression from theoretical considerations, several experiments conducted subsequently substantiated it. The peak of the curve represents the velocity which is the most probable. At room temperature the most probable velocity is about 1500 m/sec for hydrogen and about 500 m/sec for nitrogen. It is clear from these values that the mass of the molecule also influences the value of the most probable velocity: bigger and heavier molecules naturally move more sluggishly.
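As a rough check on the figures quoted above, the following Python sketch evaluates the standard result for the most probable speed, v_p = sqrt(2kT/m). The formula and the constants are textbook values, not the book's own derivation:

    # Most probable speed from the Maxwell distribution, v_p = sqrt(2kT/m).
    import math

    k = 1.38e-23                 # Boltzmann constant, J/K
    masses = {"hydrogen (H2)": 3.35e-27, "nitrogen (N2)": 4.65e-26}  # kg

    T = 300                      # room temperature, kelvin
    for gas, m in masses.items():
        v_p = math.sqrt(2 * k * T / m)
        print(f"{gas}: most probable speed ~ {v_p:.0f} m/s")
    # Prints roughly 1570 m/s for hydrogen and 420 m/s for nitrogen,
    # consistent with the ~1500 and ~500 m/sec figures quoted in the text.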
Uncertainty and wave mechanics: A hundred years ago physicists were quite content in thinking that light existed in the form of waves only, whereas matter consisted exclusively of corpuscles or particles. The phenomena of diffraction, that is, the bending of light waves around obstacles, and of interference patterns could easily be explained on the basis of light waves. However, around that time other naturally occurring phenomena, like the photoelectric effect, came to be observed. Explaining them cogently by considering light to be made up of waves posed considerable difficulties; in fact, a natural explanation became impossible. The situation could be resolved only if light could be thought of as consisting of discrete packets called quanta, as propounded by the German physicist Max PLANCK (1858-1947). Planck's quantum theory revolutionized the realm of physics. Light came to be considered as possessing a dual character: it could behave as waves or as particles. The latter are known as photons. In the year 1924, Louis de BROGLIE (1892-1987), a young French scientist, propounded an extraordinary hypothesis: a dualism similar to that of wave and quantum, present in light, may also be associated with matter. A material particle such as a proton or an electron would have a matter wave corresponding to it, just as a light quantum has a light wave. This hypothesis was put to the test by Clinton DAVISSON (1881-1958) and Lester GERMER (1896-1971), who showed experimentally that when a beam of electrons is reflected by a crystal, diffraction patterns are observed, similar to the patterns observed when X-rays are used instead of electron beams. These diffraction experiments showed that classical mechanics had to be replaced by a new "wave mechanics". De Broglie's speculation was woven by Erwin SCHRODINGER (1887-1961), an Austrian physicist, into a precise mathematical theory. He developed a wave equation for these "matter waves".
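A short illustration in Python of the wavelength in question: the de Broglie relation lambda = h/p applied to an electron accelerated through 54 volts, the voltage used by Davisson and Germer. The wavelength comes out comparable to the spacing of atoms in a crystal, which is why a crystal can diffract an electron beam like a grating:

    # de Broglie wavelength of an electron accelerated through V volts.
    import math

    h = 6.626e-34        # Planck's constant, J s
    m_e = 9.109e-31      # electron mass, kg
    e = 1.602e-19        # electron charge, C

    V = 54.0             # accelerating voltage used by Davisson and Germer
    p = math.sqrt(2 * m_e * e * V)      # momentum gained by the electron
    lam = h / p                         # de Broglie wavelength, metres
    print(f"lambda = {lam * 1e9:.3f} nm")   # about 0.167 nm, i.e. 1.67 angstrom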
Clinton Davisson and Lester Germer proved the wave nature of electrons
waves" which could explain the diffraction of a beam of electrons and so on. The variable in these equations is a certain'!)/' which oscillates with time constituting the wave. When a long rope is given a sudden jerk, a wave is seen to move along the length of the rope. The variable y here is the displacement of the rop^ from its mean position. This is easy to appreciate but in the case of matter waves what can \\i possibly signify? What mysterious quantity does it represent? The interpretation was left vague for a while but later Max BORN (1882- 1970) another German scientist gave it a
novel interpretation. A solution of the wave equation is called a wave packet. It has a finite amplitude over a region (however small) and is zero elsewhere. An individual particle is associated with such a wave packet. Born suggested that physical significance is confined to the quantity ψ², the square of the wave packet's amplitude. More exactly, the probability of finding the particle at a point is proportional to the magnitude of ψ² at that point. When interpreted in this manner, ψ is called the "probability amplitude" for locating the particle. The total probability of finding the particle somewhere is, of course, unity. The introduction of the idea of probability to link together the wave and the corpuscular concepts is extremely ingenious indeed. A wave packet does not constitute a particle, but it does represent a particle confined to a more or less limited range in space. The probability of finding the particle is large within the wave packet and very small outside this region. From this probability point of view it is natural that the wave packet (or rather its centre) should move with the same velocity as that attributed to the particle. This it does, with what is known as the group velocity. It can be shown that the simultaneous assignment to a material particle of a definite position and a definite velocity (or momentum), which is the usual way of defining its state in classical mechanics, is not possible in wave mechanics.

Erwin Schrodinger formulated the theory of wave mechanics
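The following Python sketch illustrates Born's rule on a discretised line. The Gaussian shape of the wave packet is an assumption made purely for illustration; the point is that once ψ is normalised, the probabilities ψ² add up to one:

    # Born's rule on a grid: normalise a wave packet, then check that
    # the probabilities |psi|^2 sum to unity.
    import math

    dx = 0.01
    xs = [i * dx for i in range(-500, 501)]          # grid from -5 to 5
    psi = [math.exp(-x * x) for x in xs]             # unnormalised packet

    norm = math.sqrt(sum(p * p for p in psi) * dx)   # normalisation factor
    psi = [p / norm for p in psi]

    total = sum(p * p for p in psi) * dx             # total probability
    inside = sum(p * p for x, p in zip(xs, psi) if abs(x) < 1) * dx
    print(f"total probability = {total:.4f}")        # ~1.0000
    print(f"probability of finding the particle in |x| < 1 = {inside:.4f}")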
is known as the uncertainty principle. It was first put forward by Werner HEISENBERG (1901-76). The indefiniteness as to the position can be minimised by making the wave packet zero everywhere except within a very narrow region. In that case, however, the vagueness in its momentum and energy becomes larger.

Standing waves set up in a string

An interesting outcome of the probabilistic interpretation of ψ comes when the wave equation is solved for certain boundary conditions. If a rope is tightly held at its two ends and is then set into oscillation (by plucking it), standing waves are set up. The amplitude of oscillation is zero at the two ends (the nodes), but there can be one, two, three or more peaks in between; these are the antinodes. Matter waves, on the other hand, do not necessarily have nodes at the two ends. They continue with a diminishing magnitude beyond the boundaries, however small: ψ has an exponentially decaying magnitude on the other side of the boundary. This means that there is a finite probability of finding the particle outside, whereas according to classical mechanics there should be none. This phenomenon is known as the
"tunnelling effect" and provides an explanation to radioactive emission from nuclei as well as to how the Esaki or the Tunnel diode operates. The latter devices are extremely useful in microwave electronics. An eminent physicist Felix Bloch expressed the situation in a verse: "Erwin with his psi can do calculations quite a few. But one thing has not been seen Just what psi really means."
Bio Probability
When a child is born it has to be either a boy or a girl. So one would expect that when a lady expecting a child enters the maternity ward of a hospital, there is as much probability of her delivering a male child as a female child; the a priori probability is 0.5. It may surprise many readers to know that records of births from hospitals all over the world indicate that more male children are born than female. Some surveys have shown that the probability of a male child being born is of the order of p=0.512, and that of a female child q=0.488. The reason for this imbalance, however small, is not known.
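Taking the quoted figures at face value, a short Python sketch (the family size n = 5 is an illustrative assumption, not from the text) shows how the small excess of boys tilts the binomial distribution of births:

    # Binomial probabilities of k boys among n births with p = 0.512.
    from math import comb

    p, q = 0.512, 0.488
    n = 5
    for k in range(n + 1):
        prob = comb(n, k) * p ** k * q ** (n - k)
        print(f"{k} boys in {n} births: probability {prob:.3f}")
    # The distribution is almost, but not quite, symmetric: the small
    # excess of boys tilts it slightly towards higher k.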
Health care for children has improved enormously. At a very young age children are immunized against diseases such as measles, polio and TB. They are administered vaccines either orally or by injection under the skin. These precautions enable children to develop antibodies which overcome the actual disease, if by chance they ever get such an infection. Vaccines are manufactured in very clean environments and sealed in special containers before being sent to the health centres. Some vaccines, such as oral polio vaccine, are kept at very low temperatures to retain their potency. Vaccines contain microbes causing a particular disease, but these are either killed or crippled so as not to cause the disease. It is possible that during the manufacturing process some harmful particles of the live virus may get through into the final product. If the vaccine contains, say, m live viruses per millilitre (ml), then in a large vat of volume V ml there will be n = mV live viruses present. The quality control department of the factory collects a small sample, v ml, of the vaccine and examines it to determine how many live viruses are present. The probability P(r) of finding r live viruses in the sample follows the Poisson distribution, with mean mv. Suppose a vat of vaccine contains 5 viruses per 1000 ml, so that m = 5/1000 = 0.005 per ml, and a sample of 600 ml is taken and tested. The expected number of live viruses in the sample is then 0.005 x 600 = 3, and it works out that the probability of finding two or three virus particles is maximum in this particular case.
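A small Python sketch of this very example; the figures m = 0.005 per ml and the 600 ml sample are the ones given in the text:

    # Poisson probabilities P(r) = exp(-lam) * lam**r / r! for the sample.
    import math

    lam = 0.005 * 600          # expected live viruses in the 600 ml sample
    for r in range(7):
        p = math.exp(-lam) * lam ** r / math.factorial(r)
        print(f"P({r}) = {p:.3f}")
    # P(2) and P(3) both come out to about 0.224, the joint maximum,
    # exactly the "two or three" figure quoted in the text.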
When children are born, the proud parents and relatives congratulate themselves when their best physical characteristics are passed on to their offspring. The father's curly hair, the mother's grey eyes, the grandfather's intelligence and similar characteristics are likely to be passed on from one generation to the next. Heredity is defined as the transmission of such characteristics from parents to their children. All higher plants and animals are formed by the union of two specialised cells known as germ cells or gametes, one derived from each parent. The union of the two germ cells results in a zygote, which then grows and divides; this process continues till eventually an offspring is born. All living organisms are made up of two kinds of cells: the body cells, found in the bones, flesh, skin, etc., which eventually die on account of old age; and the gametes, which are highly specialised cells associated with the continuation of life. They pass on the hereditary characteristics from one generation to the next. Cells are made up of protoplasm and a nucleus. The chromosomes, which contain the hereditary material, are located within the nucleus and are found in pairs; the nucleus of a human cell, for example, contains 23 pairs of chromosomes. These chromosomes are the vehicles of the genes, which are connected in sequence along them. Genes are actually coiled-up strands of DNA molecules and contain the genetic code which decides the colour of the eyes, the length of the nose, intelligence, and so on. All body cells contain the same chromosomes and hence the same genes.
Sex determination: sperms carry either an X or a Y chromosome, while every ovum carries an X; the pairings give 50% female (XX) and 50% male (XY) offspring
The body cells duplicate by a process known as mitosis. The cells earmarked to produce reproductive cells divide in a manner which halves the number of chromosomes; this process is known as meiosis. The sex cells, such as the
sperm (in the male) or the egg (in the female), contain only one chromosome of each pair (and not both, as in body cells). When the egg is fertilised, the single sets of chromosomes from the two gametes join to form a zygote with 23 pairs of chromosomes. This zygote proceeds to multiply and eventually produces the offspring. The characteristics of the offspring depend on whether the gene from the male or the female gamete is dominant. So it is a matter of chance which characteristic comes from the mother's side and which from the father's. The 23rd pair of chromosomes comprises the sex chromosomes. In a female this pair consists of two "X" chromosomes. Males have one X chromosome and another of a different size, which is called the "Y" chromosome. An egg from the female has just one X chromosome, whereas a sperm from the male may carry either an X or a Y chromosome. If the egg is fertilized by a sperm with an X chromosome, the resulting offspring is female; if the sperm carries a Y chromosome, the result is a male child. Now, millions of sperms are produced. Which one of them fertilizes the egg is a matter of chance, and so, therefore, is whether a male or a female child results from the union. Consider the mating of white, yellow and cream coloured guinea pigs, which are very popular animals for biological experiments. A yellow guinea pig has two yellow (Y) colour genes in its body cells, so its colour is yellow and it can pass on only the gene for yellow (Y) colour to the next generation. Similarly, white guinea pigs have only white (W) colour genes. Cream guinea pigs, however, have one white and one yellow gene in the pair of chromosomes; they produce gametes carrying the gene for yellow (Y) and gametes carrying the gene for white (W) in equal numbers. If two cream coloured guinea pigs mate, they may have white, yellow or cream coloured offspring. A white offspring can occur in only one of the four equally likely gamete pairings. Hence the probability of a white offspring is 1/4, of a yellow offspring 1/4, and of a cream one 1/2.
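These ratios can be checked by brute force. The Python sketch below simply enumerates the four equally likely gamete pairings; the gene labels Y and W follow the text:

    # Enumerate the cream x cream cross: each parent passes Y or W
    # with equal chance, so the four gamete pairings are equally likely.
    from itertools import product
    from collections import Counter

    def colour(pair):
        # YY -> yellow, WW -> white, one of each -> cream
        return {"YY": "yellow", "WW": "white"}.get("".join(sorted(pair)), "cream")

    pairings = list(product("YW", repeat=2))   # (father's gene, mother's gene)
    counts = Counter(colour(p) for p in pairings)
    for c, n in counts.items():
        print(f"{c}: probability {n}/{len(pairings)}")
    # yellow 1/4, white 1/4, cream 2/4, the ratio stated in the text.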
White and coloured guinea pigs

Colour by chance: the gametes of the parental generation combine to give the F1 and F2 generations
The probabilities derived in this manner are called equally likely probabilities, as they are deduced from the symmetry of the experiment itself. The first systematic experiments on heredity were conducted by an Austrian monk, Johann Gregor MENDEL (1822-84). He was attached to an ancient monastery with a large garden, and had plenty of leisure time to conduct
Gregor Mendel and his pea plants
systematic experiments on pea plants. During the period 1856-63 he experimented with over 30,000 pea plants and kept detailed records of his work. By concentrating on just a few contrasting features of the plant, he determined in what proportion each generation inherited a given feature, and so was able to demonstrate specific patterns of inheritance.
Chancy Games

Playing games has been a natural human activity. The earliest games of all were probably races and trials of strength; these continue even to this day as athletic competitions. In ancient Rome, specially trained soldiers known as gladiators were called upon to fight hungry lions, watched by the Emperor as well as the public, and a holiday was usually announced on the occasion. Such trials of brute strength were appreciated by everyone. If the gladiator won, he became a hero, earned his freedom and also came into a fortune. Just in case he lost, it was plain hard luck for him; the lion had a great feast. This game has changed with time, and today we hear of bull fights in Spain and Mexico.

We all play games. A healthy game of cricket or hockey is good for the mind as well as for the body. Competitive games of various kinds have been played by children as well as by grown-ups ever since human beings started living in communities many thousands of years ago. The outcome of an evenly matched game is uncertain until the last moment, and this is what makes it so exciting.
People today take the outcome of sports quite seriously. In many countries an unexpected outcome of a football match often results in riots and considerable damage to property. The healthy aspect of games (both the indoor and the outdoor varieties) is that they act as sources of relaxation and amusement. On the other hand, there are many people who gamble and place large sums of money as bets on the winning team, hoping to make a fortune with a little help from the mysterious "Lady Luck". Gambling is an age-old pastime. People have been indulging in games of chance since the dawn of civilization. Wall paintings and hieroglyphic writings from within the ancient
A wall painting from an Egyptian tomb, c.2000 B.C.
pyramids in Egypt indicate that people gambled even in those far-off days. Ceramic dice, complete with dots on their six sides, have been found among the archaeological ruins of Mesopotamia, in present-day Iraq, and also in the Harappan ruins along the banks of the river Indus. The dice found in Mohenjodaro are made of clay. The numbers were indicated by dots, a tradition which continues even today; dots are easier to make out than written numbers. The dice found in Rome were made of hollow bones, shaped to have a square cross-section, with the ends filled up by square pieces of bone.
A kingdom on the toss of a die
Such hollow dice are easier to "load" by making one side heavier than the others. In our country there are many famous epics which tell of kings who gambled with dice and often lost their kingdoms. So it is not surprising that even today people amuse themselves by winning or losing either something notional (points) or real money, and even property. Coming into a lot of money without actually working hard for it has always been a human weakness. In fact, gambling is often called "the third instinct", coming after food and sex.
Religious authorities in every civilization have all along frowned upon gambling as an evil pastime; there are passages against gambling in all the holy books. There is recorded evidence of kings who outlawed gambling but continued, in private, to gamble themselves. An illustrious example is king Henry VIII of England, who declared dicing and cards "unlawful" for his subjects though not for himself: he continued playing cards (with stakes) every evening with his friends. In fact, one of his many wives won a fortune off him before he divorced her! This habit of linking fortune to the toss of a die or the fall of a card is as popular today as it was a few centuries ago. Of these, the throw of a die has the longest history. Players would toss two or even three dice together. With two dice there are 36 possible outcomes, whereas with three there are 216, and as all these outcomes are equally probable the results are truly matters of chance. Now, no professional gambler wanted the outcome to be left to chance! Gamblers wanted a say in the possible outcome so as to increase their profits; that was the beginning of marked cards and loaded dice. It was known that cheating paid good dividends. Gambling was prevalent in Italy and France in the middle ages, and it was at this stage, when cheating had become rampant, that kings and their mathematicians got interested in the subject. The mathematicians concerned themselves with the analysis of the games. They were curious to know who should win, what the best move is, and what the chances of certain random events are. They also wanted to know when a game was fair and when it was not. These were natural questions of mathematical interest, and it was as a result of these studies that a new discipline of mathematics grew up, which came to be known as "Probability Theory".
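The counting behind these numbers is easy to verify. A short Python sketch enumerating the sample spaces (the tabulation of sums is an illustrative extra, not from the text):

    # Two dice give 36 equally likely outcomes, three dice give 216;
    # the sums, however, are not equally likely.
    from itertools import product
    from collections import Counter

    two = list(product(range(1, 7), repeat=2))
    three = list(product(range(1, 7), repeat=3))
    print(len(two), len(three))                 # 36 and 216

    sums = Counter(sum(roll) for roll in two)
    for s in sorted(sums):
        print(f"sum {s:2d}: {sums[s]} ways, probability {sums[s]}/36")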
One of those who gave considerable thought to gaming puzzles was an Italian by the name of Gerolamo CARDANO (1501-76). He was a mathematician as well as an avid gambler. He studied the subject for many decades and then went on to write a manuscript entitled "Games of Chance". He came to the conclusion that a fair game of chance was possible only if the die was honest; a dishonest die had weights inserted in it, so that when tossed it landed in such a manner that certain numbers came up more often than others. It is not surprising at all that this particular manuscript was suppressed for over a hundred years by vested interests, and that it finally saw the light of day only in the year 1663. Galileo also wrote a book on games of chance which was all about probability. His work mainly related to the outcomes of throwing two dice simultaneously. This he did on the basis
of elaborate explanations; it is so much easier to do today on the basis of set theory and sample spaces. Blaise PASCAL (1623-62) and Pierre de FERMAT (1601-65) in France also published arguments related to probability; they exchanged letters on the subject beginning in the year 1654. It is indeed a matter of surprise that probability as a subject of mathematics did not emerge until about 1660, although random phenomena such as those arising in games of chance had always been present in our society and environment.

Pierre de Fermat, a pioneer of probability theory

The Dutch people were always business-like. Four hundred years ago they were interested in knowing the chances of their ships, loaded with cargo, reaching home ports safely. Christian HUYGENS (1629-95) in Holland investigated problems related to insurance, and also how long people survived in those troublesome days of war and turmoil. He too wrote a book on the subject of probability, which was published in 1657. Gambling continues to be as popular today as it was in the past. Playing cards and games of dice (with stakes) are common during the Diwali festival in our country. On the other hand, there are some cities in the world, including Las Vegas in the USA and Monte Carlo in Monaco, where gambling is highly organised and keeps the economy of the town going from strength to strength. People flock to such centres, have fun and participate in games of chance. These
games are managed in such a manner that the casinos and the city corporations are the ultimate beneficiaries. A popular game at the casinos is the roulette described earlier. Another dangerous and evil game (?), which originated in Eastern Europe, is known as Russian roulette. A revolver with six chambers is loaded with just one bullet. A player spins the chamber and, when it comes to a halt, places the gun against his ear and presses the trigger. The probability is 1/6, or 0.167, that the gun fires and the gambler finds his way to the nearest graveyard. There is, however, a probability of 5/6, or 0.833, that he misses. The gun is then ceremoniously passed on to the next player (the players are usually seated around a circular table), who takes his turn. The gun does the rounds until someone becomes its unfortunate victim.

Christian Huygens worked on probability related to insurance
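A small sketch of the arithmetic, under the assumption, implicit in the description above, that the chamber is spun afresh before every turn so that each pull fires with probability 1/6 independently of earlier turns:

    # Russian roulette odds with a fresh spin before every turn.
    p_fire = 1 / 6

    # Probability that the game is still going after n turns:
    for n in [1, 2, 3, 6]:
        survive = (1 - p_fire) ** n
        print(f"after {n} turn(s): everyone safe with probability {survive:.3f}")

    # Probability that it is the k-th player who gets the bullet:
    for k in [1, 2, 3]:
        p_k = (1 - p_fire) ** (k - 1) * p_fire
        print(f"player {k} is the victim with probability {p_k:.3f}")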
sics", with large prize moneys and a gold cup as a reward to the owner of the winning horse. The general public approach a bookmaker (or a bookie as he is popularly known) and place bets with him. They hand over their stake money to the bookie and obtain a receipt or a ticket. A horse which is a rank outsider, that is one which is least expected to win, has very large odds in its favour, say, 50 to 1. If per chance that particular horse happens to win then the lucky customer receives Rs 50 for every rupee he has staked in the form of a bet. It goes without saying that if the horse loses (as is most likely to happen) the bookie keeps the stake money. However, as very few outsiders are likely to win the races, and when one does the bookmaker keeps the money staked on all other horses; an outsider is the bookmaker's darling.
A bookmaker at his stall
A favourite horse is usually one which has won many races in the recent past and has an advantage in the handicap. It has much shorter odds, say, 2 to 1, or even 4 to 6, that is, "odds on". In such a case the horse is very likely to win, but the gambler receives only twice the stake money in the first case and only about 0.66 of it in the second. Odds-on horses are called "racing certainties", with probabilities of success tending towards 1, or 100%.
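The relation between quoted odds and probability can be made explicit. A minimal Python sketch; the conversion rule is the standard one for odds of "a to b against", not something stated in the book:

    # Odds of a-to-b against a horse imply a win probability of b/(a+b).
    def implied_probability(a, b):
        """Odds of a-to-b against the horse."""
        return b / (a + b)

    for name, (a, b) in {
        "rank outsider (50 to 1)": (50, 1),
        "favourite (2 to 1)": (2, 1),
        "odds-on favourite (4 to 6)": (4, 6),
    }.items():
        print(f"{name}: implied probability {implied_probability(a, b):.3f}")
    # 0.020, 0.333 and 0.600 respectively: the shorter the odds,
    # the nearer the implied probability gets to 1.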
On the other hand, placing bets on the winners of a combination of races (the jackpot, etc.) is really chancy, and the probability of winning is poor. But a lot of money can be won on such bets, and that inspires gamblers to stake their money on a large number of combinations to increase their chances. The effect of international football matches on life in the cities where they are played is well known. People gamble not so much on individual matches as on local football pools. A large number of football matches are played over the weekends. People send in their coupons and the betting money by post to a centralized pool office, ticking off (in anticipation) the names of the teams they expect to win. Depending on whether they have the correct combination of winners marked on their coupons, they either win a large sum of money or lose their investment. Invariably it is the latter, but the lure of making easy money is so great that week after week they send in their coupons and the betting money!
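To see just how poor the chances are, here is a short sketch counting coupon patterns. The coupon sizes, and the simplifying assumption that a punter picks one pattern at random from equally likely outcomes, are illustrative only:

    # If each match can end in a home win, an away win or a draw,
    # a coupon of n matches has 3**n possible outcome patterns.
    for n in [5, 8, 13]:            # illustrative coupon sizes
        patterns = 3 ** n
        print(f"{n} matches: {patterns} patterns; "
              f"one random coupon is fully correct with probability {1 / patterns:.2e}")
    # Even a modest 13-match coupon has 1,594,323 patterns, which is why
    # punters buy many combinations and still usually lose.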
Glossary

Antibodies: Protective protein molecules produced by the body in response to invasion by a foreign substance.

DNA: Deoxyribonucleic acid is the basic molecule of life that forms the 'blueprint' or hereditary material of almost all living organisms. It comprises four nitrogenous bases, deoxyribose sugar and phosphate molecules. The genetic information coded in this molecule is given by a specific sequence of the bases.

Histogram: An experiment has a range of possible outcomes. This range is divided into a number of groups, and the relative frequency (or percentage) of outcomes in each group is estimated. A graphical representation of data in this format is known as a histogram.

Nucleus: Present in nearly all cells of higher plants and animals including man, it is the central body containing the chromosomes, the vehicles of the hereditary material, DNA.

Protoplasm: The substance present within a cell, including the outer plasma membrane.

Standard deviation: When an experiment is performed a large number of times, its frequency distribution often takes the form of a bell-shaped curve whose peak lies at the mean, m. The standard deviation, σ, is such that 68% of the outcomes fall within the range (m-σ) to (m+σ).
EXTENSIVE is the scope of the theory of probability. It covers all disciplines of science; the management of business, industry and personnel; outdoor and indoor games; and, to a large extent, even our day-to-day lives. In fact, in any situation involving uncertainty, chance or 'luck', probability has a role to play. Several elementary, yet important, concepts of probability, like discrete and continuous variables and the Poisson, normal and binomial distributions, together with their usefulness in various fields, are dealt with in the book. Some otherwise difficult concepts, like the Markov process, Bayes' theorem, Bernoulli trials and some Operations Research techniques, have also been explained clearly. Interesting examples and analogies from day-to-day life enhance the readability of the book. This lavishly illustrated book introduces the concept of probability, its intricacies and its wide range of applications to the non-specialist reader in lucid language. And, while doing so, it explains the science underlying the outcome of events which appear to occur as a matter of chance.

About the Author

K.D. Pavate (b. 1930), after graduating with Physics and Mathematics from Fergusson College, Pune, studied Physics at the Cavendish Laboratory, University of Cambridge. For the next three years he worked with the Metropolitan-Vickers Electrical Co. Ltd., Manchester. On returning to India, he joined the Central Electronics Engineering Research Institute (CEERI), Pilani. In 1976 he moved to Delhi as the head of the newly formed CEERI Centre, where he established R&D groups to work on various user-oriented aspects of electronics. Pavate's interest in recent years has shifted to science communication and writing articles on science and technology. He has already written two popular science books, entitled Artificial Intelligence and Information Highways. A Matter of Chance is his third book of the same genre.
ISBN: 81-7236-133-5